Oren Eini, CEO of RavenDB, a NoSQL Open Source Document Database

time to read 2 min | 376 words

Date range queries can be quite expensive for RavenDB. Consider the following query:

from index 'Users/Search'
where search(DisplayName, "Oren") and CreationDate between "2008-10-13T07:18:01.623" and "2018-10-13T07:18:01.623"
include timings()

The root issue is that this is a compound query: we use full-text search on the left, but then we need to match the date range on the right. The way Lucene works, we have to compute the set of all the documents that match the date range. If there are a lot of documents in that range, we have to scan through a lot of values.

We spent a lot of time and effort optimizing date queries in RavenDB. Such issues also heavily impacted the design of our next-gen indexing capabilities (but more on that when it matures enough to discuss).

One of the primary design principles of RavenDB is that it learns from previous usage, and we realized that date ranges in queries are likely to repeat often, so we take advantage of that. The details are a bit complex and require an understanding of how Lucene stores its data in immutable segments. We are able to recognize queries over repeating date ranges and remember them, so the next time we see the same date range, we already have the set of matching documents ready.

That feature was deployed to address a specific customer scenario involving a lot of wide date range queries, and it had a big impact there.

Last week we ran into some funny metrics for a completely different customer, with a very different scenario. You can probably tell at what point they moved to the updated version of RavenDB and were able to take advantage of this feature:

(Image: the customer’s CPU utilization metrics, dropping to near zero at the point they moved to the updated version of RavenDB.)

The really nice thing about this, from my perspective, is that none of us even considered the impact that feature would have for this scenario. They upgraded to the latest version to get access to the new features, and this is just sitting in the background, pushing their CPU utilization to near zero.

That’s the kind of news that I love to get.

time to read 3 min | 413 words

This series has been going on quite a bit longer than I intended it to be. Barring any new developments or questions, I think that this will be the last post I’ll write on the topic.

In a previous post, I implemented authenticated encryption, ensuring that we’ll be able to clearly detect if the ciphertext we got has been modified along the way. That is relevant because we have to think about malicious actors but we also have to consider things like bit flips, random hardware errors, etc.

In most crypto systems, we also have to pass along some metadata about the encrypted messages. Right now, that is strictly outside our scope, but it turns out that there is a compelling reason to cover that plain text data as well. For example, let’s say that I’m sending a number of messages. I have to include each message’s length and its position in the set of messages in the clear; otherwise, the receiver might not be able to make sense of them. When we decrypt a message, we want to take that additional (unencrypted) information into account as well.

The reason for that is simple: we want to ensure that this additional data hasn’t been modified, using the same cryptographic tools we already have. It turns out that this is quite simple, check out the code:

We added a parameter to the encryptSivWithNonce() and decrypt() functions that takes a buffer with all the associated data for the message, and all we need to do is feed that into the MAC computation as well. In decrypt(), we do exactly the same: we compute the hash from the encrypted text and the additional data, and abort if it isn’t an exact match for the provided MAC.
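
The original snippet isn’t reproduced here, but a minimal sketch of the idea, using HMAC-MD5 as in the rest of the series and hypothetical function names, might look like this:

const std = @import("std");
const HmacMd5 = std.crypto.auth.hmac.HmacMd5;

// Sketch only: the authentication tag covers the associated data (sent in the
// clear) as well as the cipher text, so tampering with either one is detected.
fn computeTag(mac_key: []const u8, associated_data: []const u8, cipher_text: []const u8, tag: *[HmacMd5.mac_length]u8) void {
    var mac = HmacMd5.init(mac_key);
    mac.update(associated_data);
    mac.update(cipher_text);
    mac.final(tag);
}

// On decryption we recompute the tag over the same inputs and abort if it
// doesn't match the one stored with the message. (A production implementation
// would use a constant-time comparison here, as discussed in the timing post.)
fn verifyTag(mac_key: []const u8, associated_data: []const u8, cipher_text: []const u8, expected: []const u8) !void {
    var actual: [HmacMd5.mac_length]u8 = undefined;
    computeTag(mac_key, associated_data, cipher_text, &actual);
    if (!std.mem.eql(u8, &actual, expected)) return error.AuthenticationFailed;
}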

And with this in place, we have taken a (probably very bad) encryption system built from a single primitive (MD5) and brought it to roughly the modern shape of AEAD (Authenticated Encryption with Associated Data).

I want to emphasize that this entire series is meant primarily to go over the details of how you build and use an encryption system, not to actually build a real one. I didn’t do any analysis on how secure such a system would be, and I wouldn’t want to trust this with anything beyond toying around.

If you have any references on similar systems, I would be very happy to learn about them. I doubt that I’m the first person who has tried to build a stream cipher from MD5, after all.

time to read 3 min | 447 words

I mentioned in a previous post that nonce reuse is a serious problem. It is enough to reuse a nonce even once to have catastrophic results on your hands. The problem occurs, interestingly enough, when we are able to capture two separate messages generated with the same key & nonce. If we capture the same message twice, that is not an issue (XORing the two values just gives us all zeroes).

The question is whether there is something that can be done about this. The answer is yes: we can create a construction that is safe from nonce reuse. This is called SIV mode (Synthetic IV).

The way to do that is to make the nonce itself depend on the value that we are encrypting. Take a look at the following code:

The idea is that we still accept a nonce, as usual, but instead of using it directly, we compute a hash of the plain text keyed with that nonce and use the result as the actual nonce. Aside from that, the rest of the system behaves as usual. In particular, there are no changes to the decryption.
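
The post’s snippet isn’t shown here; a rough sketch of that idea, with a hypothetical helper name and HMAC-MD5 as the keyed hash (as elsewhere in the series), could look like this:

const std = @import("std");
const HmacMd5 = std.crypto.auth.hmac.HmacMd5;

// Sketch only: derive the nonce we actually use from the plain text itself,
// keyed with the caller-provided nonce. Reusing the caller's nonce with a
// different message now yields a completely different synthetic nonce.
fn computeSyntheticNonce(user_nonce: []const u8, plain_text: []const u8, siv: *[HmacMd5.mac_length]u8) void {
    // hash the plain text, using the nonce as the key
    HmacMd5.create(siv, plain_text, user_nonce);
}

The encryption routine would then use siv in place of the raw nonce and store it alongside the cipher text exactly as before, which is why the decryption side doesn’t need to change.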

Let’s see what the results are, shall we?

With a separate nonce per use:

attack at dawn! 838E1CE1A64D97E114237DE161A544DA5030FC5ECAB1C2
0D34AF838634D1C591AE208FC0AEE706690669E9F56F45C1
attack at dusk! EEA7DE8A51A06FE6CA9374CDDEC1053249F8B1F0BF1995A
0EEEE7D6EBF68868ECAE7CBEFD6EE23017480ACD494D634

Now, what happens when we use the same nonce?

attack at dusk! 0442EFA977919327C92B47C7F6A0CD617AE4FD3138DF07D4
5994EBC2C4B709ACDE1130422924B7206354D03569FDAA
attack at dawn! 324A996C22F7FFDE62596C0E9EE37D7EE1F89569A10A1188B
A4A03EE7B8C47DF347A20D1B73EB4523D3511F2F46FF2

As you can see, the values are completely different, even though we used the same key and nonce and the plain text is mostly the same.

Because we are generating the actual nonce we use from the hash of the input, reusing the nonce with different data will result in a wildly different nonce being actually used. We are safe from the catastrophe that is nonce reuse.

With SIV mode, we are paying with an additional hashing pass over the plain text data. In return, we have the following guarantees:

  • If the nonce isn’t reused, we have nothing to worry about.
  • If the nonce is reused, an attacker will not be able to learn anything about the content of the message.
  • However, an attacker may be able to detect that the same message is being sent, if the nonce is reused.

Given the cost of nonce reuse without SIV, it is gratifying to see that the worst case scenario for SIV with nonce reuse is that an adversary can detect duplicate messages.

I’m not sure how up to date this is, but this report shows that SIV adds about 1.5 – 2.2 cycles to the overall cost of encryption. Note that this is for actual viable implementations, instead of what I’m doing, which is by design, not a good idea.

time to read 2 min | 334 words

In the previous post, I showed some code that compared two MAC values (binary buffers) and I mentioned that the manner in which I did that was bad.

Here is the code in question:

When you are looking at code that is used in a cryptographic context, you should be aware that any call that compares buffers (or strings) must not short circuit. What do I mean by that? Let’s look at the implementation of those two functions:

Those two functions do the same thing, but in a very different manner. The issue with eql() is that it will stop at the first mismatched byte, while timingSafeEql() will always scan through both buffers in full and only then return the result.
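
Neither implementation is reproduced above; the standard library ships equivalents, but a hand-rolled sketch makes the difference obvious (this is an illustration, not the post’s code):

const std = @import("std");

// Short-circuiting comparison: std.mem.eql returns as soon as a byte differs,
// so the running time leaks how long the matching prefix is.
fn eql(a: []const u8, b: []const u8) bool {
    return std.mem.eql(u8, a, b);
}

// Constant-time comparison: always touches every byte and only looks at the
// accumulated difference at the end, so the timing doesn't depend on the data.
fn timingSafeEql(a: []const u8, b: []const u8) bool {
    if (a.len != b.len) return false;
    var diff: u8 = 0;
    var i: usize = 0;
    while (i < a.len) : (i += 1) {
        diff |= a[i] ^ b[i];
    }
    return diff == 0;
}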

Why do we need that?

Well, the issue is that if I can time the duration of a call like that (and you can, even over the network), I’ll be able to test various values until I match whatever secret value the code is comparing against. In this case, I don’t believe that the use of eql() is an actual problem. We used it on the output of an HMAC operation vs. the expected value. The caller has no way to control the HMAC computation and already knows what we are comparing against, so I can’t think of any way that this could be exploited. However, I’m not a cryptographer, and any buffer comparison in crypto-related code should use a constant time method.

For that matter, side channels are a huge worry in cryptography. AES, for example, is nearly impossible to implement safely in software at this point, because naive implementations are vulnerable to timing side channels, and we need hardware support to do it right. Other side channels include watching caches, power signatures and more. I don’t actually have much to say about this, except that when working with cryptography, even something as simple as multiplication is suspect, because it may not complete in constant time. As a good example of the problem, see this page.

time to read 7 min | 1273 words

In the previous post I showed how we can mess around with the encrypted text, resulting in a valid (but not the original) plain text. We can use that for many nefarious reasons, as you can imagine. Luckily, there is a straightforward solution for this issue. We can implement something called MAC (message authentication code) to ensure that the encrypted data wasn’t tampered with. That is pretty easy to do, actually, since we have HMAC already, which is meant for exactly this purpose.

The issue here is an interesting one: what shall we sign? Here are the options:

  1. Sign the plain text of the message, using a keyed hash function (HMAC-MD5, in our case). Because we are using the secret key to compute the hash, just looking at the hashed value will not tell us anything about the plain text (for example, if we were using plain MD5, we could use rainbow tables to figure out what the plain text was from the hash). Since there is no security issue with making the signature public, we can just append it to the output of the encryption as plain text. At least, I don’t think there is. I’ll remind you again that I’m not a cryptographer by trade.
  2. Sign the plain text of the message (using a keyed hash function or a regular cryptographic hash function) and append the hash, encrypted, to the output message.
  3. Sign the encrypted value of the message, and append that hash to the output message.

A visualization might make it easier to understand. If you want to read more, there is a great presentation here.

(Image: diagram of the three signing options above, using two separate keys, key1 and key2.)

The first two options are bad. Using those methods will leak your data in various ways. There is something that is apparently called the Cryptographic Doom Principle, which is very relevant here. The idea is simple: we don’t trust the encrypted value, since it may have been modified by an adversary. The first two options require us to first take an action (decrypting the data) before we authenticate the message, and an attacker can then use various tricks to rip apart the whole scheme. With the third option, the very first thing we do is verify that the encrypted text we were handed was indeed signed by a trusted party (one that has the secret key).

If you’ll look closely at the image above, you can see that I’m using two keys here, instead of one: key1 and key2. What is that about?

In cryptography, there is a strong reluctance to reuse the same key in different contexts. The issue is that if we use a single key in multiple scenarios (such as encryption and authentication), a weakness in one of them can be exploited in the other. Remember, cryptography is just math, and the fear is that given two values that were computed with the same key, but using different algorithms, you can do something with that. That has led to practical attacks in the past, so the general practice is to avoid reusing keys. The good thing is that given a single cryptographic key, it is easy to generate a new key using a key derivation function.

I’m still going to limit myself to HMAC-MD5 only (remember, none of this code is meant to actually be used), so I can derive a new key from an existing one using the following mechanism:

The idea is that we use HMAC with the static domain string we get to generate the new key. In this case, we actually use it twice, with the nonce being used to inject even more entropy into the mix. Since HMAC-MD5 outputs 16 bytes and I need a 32-byte key, I’m doing that twice, with a different counter value each time. I also swap the order of the (nonce, domain) and (domain, nonce) fields on each hashing pass to make it more interesting.
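
The derivation snippet isn’t reproduced here; a sketch of the mechanism just described (hypothetical names, and the domain strings are my own placeholders) might look like this:

const std = @import("std");
const HmacMd5 = std.crypto.auth.hmac.HmacMd5;

// Sketch only: derive a 32-byte sub-key from the master key, a domain string
// (say, "encryption" or "authentication") and the per-message nonce.
// HMAC-MD5 emits 16 bytes, so we run it twice with a different counter byte,
// swapping the (nonce, domain) order on the second pass.
fn deriveKey(master_key: []const u8, domain: []const u8, nonce: []const u8, derived: *[32]u8) void {
    var mac = HmacMd5.init(master_key);
    mac.update(&[_]u8{0}); // counter for the first half
    mac.update(nonce);
    mac.update(domain);
    mac.final(derived[0..16]);

    mac = HmacMd5.init(master_key);
    mac.update(&[_]u8{1}); // counter for the second half
    mac.update(domain); // (domain, nonce) instead of (nonce, domain)
    mac.update(nonce);
    mac.final(derived[16..32]);
}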

A reminder: I didn’t spend any time trying to figure out what kind of security this sort of system brings. It looks very much like what Sodium does for key derivation, but I wouldn’t trust it with anything.

With that in place, here is the new code for encryption:

We have a lot going on here. In the initWithNonce() function, we generate the derived keys for two domains. Then we generate the block of key stream, as we previously did. The last stage in the initWithNonce() function is initializing the MAC computation. Note that in addition to using a derived key for the MAC, I’m also adding the nonce as the first thing that we’ll hash. That should have no effect on security, but it ties the output hash even closer to this specific encryption.

In the xorWithKeyStream() function, you’ll note that I’m now passing both an input and an output buffer; aside from that, this is exactly the same as before (with the actual key stream generation moved to genKeyStreamBlock()). Things get interesting in the encryptBlock() function. There we XOR the value that we encrypt with the key stream and push that to the output. We also feed the encrypted value into the MAC that we generate.

The idea with encryptBlock() is to allow you to build an encrypted message in a piecemeal fashion. Once you are done with the data you want to encrypt, you need to call finalize(). That copies the nonce and completes the computation of the MAC over the encrypted portion.

The encrypt() function is provided in order to make it easier to work with the data when you want to encrypt a single buffer. (And yes, I’m not doing any explicit bounds checks here, relying on Zig to panic if we need it to. I mentioned that this isn’t production level code already?)
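
The encryption code itself isn’t reproduced here, so below is a rough sketch of the shape described above (hypothetical fields and layout, building on the deriveKey() sketch from earlier; not the post’s actual code):

const std = @import("std");
const HmacMd5 = std.crypto.auth.hmac.HmacMd5;

// Sketch only: encrypt-then-MAC on top of the HMAC-MD5 counter-mode key stream.
const Encryptor = struct {
    key_stream_key: [32]u8 = undefined, // derived key for the encryption domain
    mac: HmacMd5 = undefined, // incremental MAC, keyed with the authentication domain key
    nonce: [16]u8 = undefined,
    counter: u8 = 0, // a u8 counter keeps the sketch short; a real one would be wider
    block: [16]u8 = undefined, // current key stream block
    used: usize = 16, // bytes of the current block already consumed

    fn initWithNonce(self: *Encryptor, master_key: []const u8, nonce: [16]u8) void {
        var mac_key: [32]u8 = undefined;
        self.nonce = nonce;
        deriveKey(master_key, "encryption", &self.nonce, &self.key_stream_key);
        deriveKey(master_key, "authentication", &self.nonce, &mac_key);
        self.mac = HmacMd5.init(&mac_key);
        self.mac.update(&self.nonce); // tie the MAC to this specific nonce
    }

    fn genKeyStreamBlock(self: *Encryptor) void {
        var input: [17]u8 = undefined;
        input[0..16].* = self.nonce;
        input[16] = self.counter; // counter mode: nonce || counter
        HmacMd5.create(&self.block, &input, &self.key_stream_key);
        self.counter += 1;
        self.used = 0;
    }

    fn xorWithKeyStream(self: *Encryptor, input: []const u8, output: []u8) void {
        var i: usize = 0;
        while (i < input.len) : (i += 1) {
            if (self.used == 16) self.genKeyStreamBlock();
            output[i] = input[i] ^ self.block[self.used];
            self.used += 1;
        }
    }

    // Encrypt a piece of the message and feed the encrypted bytes to the MAC.
    fn encryptBlock(self: *Encryptor, plain: []const u8, out: []u8) void {
        self.xorWithKeyStream(plain, out);
        self.mac.update(out[0..plain.len]);
    }

    // Once all the pieces have been pushed, emit the MAC of the encrypted portion.
    fn finalize(self: *Encryptor, tag: *[HmacMd5.mac_length]u8) void {
        self.mac.final(tag);
    }
};

An encrypt() convenience, as described above, would just call encryptBlock() over a single buffer and then finalize(), copying the nonce and the MAC into the output.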

For encryption, we can pass either a single buffer to encrypt or we can pass it in pieces. For decryption, on the other hand, the situation isn’t as simple. To decrypt the data properly, we first need to verify that it wasn’t modified. That means that to decrypt the data, we need all of it. The API reflects this behavior:

The decrypt() function does do some checks. We are dealing here with input that is expected to be malicious. As such, the first thing that we do is to extract the MAC and the nonce from the cipher text buffer. I decided it would be simpler to require that as a single buffer (although, as you can imagine, it would be very simple to change the API to have that as independent values).

Once we have the nonce, we can initialize the struct with the key and nonce (which will also derive the keys and set up the macGen properly). The next step is to compute the hash over the encrypted text and verify that it matches our expectation.

Yes, I’m using eql() here for the comparison. This is a short circuiting operation, and I’m doing that intentionally so I can talk about this in a future post.
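 
Again, the original snippet isn’t included here. Reusing the Encryptor sketch from above, and assuming a nonce || MAC || payload layout for the cipher text buffer (an assumption of mine, not the post’s documented format), the decryption side might look roughly like this:

const std = @import("std");
const HmacMd5 = std.crypto.auth.hmac.HmacMd5;

// Sketch only: verify first, decrypt second. Bounds checks beyond the minimal
// length test are omitted; Zig will panic on a mismatch, as the post notes.
fn decrypt(master_key: []const u8, cipher_text: []const u8, plain_out: []u8) !void {
    if (cipher_text.len < 32) return error.InvalidCipherText;

    var nonce: [16]u8 = undefined;
    var i: usize = 0;
    while (i < 16) : (i += 1) nonce[i] = cipher_text[i];
    const provided_mac = cipher_text[16..32];
    const encrypted = cipher_text[32..];

    // Re-derive the keys and recompute the MAC over the encrypted portion.
    var enc = Encryptor{};
    enc.initWithNonce(master_key, nonce);
    enc.mac.update(encrypted);
    var actual_mac: [HmacMd5.mac_length]u8 = undefined;
    enc.mac.final(&actual_mac);

    // Deliberately a short-circuiting comparison, as mentioned above.
    if (!std.mem.eql(u8, &actual_mac, provided_mac)) return error.AuthenticationFailed;

    // Only after the MAC checks out do we touch the actual decryption.
    enc.xorWithKeyStream(encrypted, plain_out[0..encrypted.len]);
}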

If the MAC that I compute is a match to the MAC that was provided, we know that the message hasn’t been tampered with. At that point we can simply XOR the encrypted text with the key stream to get the original value back.

A single bit out of place will ensure that the decryption fails. What is more, note that we don’t do anything with the decryption until we have validated the provided MAC against the cipher text. To do anything else would invite cryptographic doom, so it is nice that we were able to avoid it.

In the next post, I’m going to cover timing attacks.

time to read 3 min | 526 words

In the previous post, we managed to get to a fairly complete state. The full code is less than 50 lines, but it has enough functionality that we can actually make use of it.

Don’t actually do that. This code is horribly broken, and the adage of “don’t implement your own encryption” holds very strongly here.

Let’s consider a fairly typical encryption setup. You log in to my service, I generate an encrypted cookie and hand it back to you. In all future interactions, you give me back the cookie, which I can decrypt and make decisions based on.

For example, let’s assume that we want to send the user the following:

We compute the user name and role, pass it to the genCookie() function and have an encrypted string to work with. In this case, here is the cookie in question:

8E92B0E4AE4BE6BEFEF2638D02416E61,6763169603E0BFA8A6BC6B2C768EABAA930E15CB7D11901C9E932ED0DBD8

To go the other way, we decrypt the cookie using the nonce and the key, then parse the JSON into the cookie struct, like so:

With this in place, we can start making use of this foundation. Here is some user level code:

At this point, this is awesome. We know that the cookie we got was encrypted with my key (which the user doesn’t have, obviously). So I handed an encrypted blob to the user, got it back and now I can make decisions based on this.

Or can I? A proper crypto system is defined as one where everything except the secret key is known, and it still maintains all its properties. The plain text of the cookie is:

{"role":"users","name":"oren"}

But I don’t have that. However… can I play with this? Remember the last post: we saw that XORing the encrypted text can give us a lot of insight. What about in this case? Here is what I know:

  • The role value starts at position 9 and lasts 5 characters.
  • The value it currently has is “users”, we would like it to be “admin”.

Since we got the encrypted text from the server, we can return something else to it at a later point. Can we take advantage of that? Well, XOR is commutative and associative, right? So we can do this:
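
As a concrete sketch of that trick (a hypothetical helper of mine, operating on the hex-decoded cipher text portion of the cookie shown above):

// Sketch only: flip "users" into "admin" inside the encrypted cookie without
// knowing the key. `cipher` is the hex-decoded second half of the cookie.
fn promoteToAdmin(cipher: []u8) void {
    const current = "users"; // what we know sits at position 9 of the plain text
    const desired = "admin"; // what we want the server to decrypt instead
    var i: usize = 0;
    while (i < current.len) : (i += 1) {
        // cipher[9 + i] == plain[9 + i] ^ keystream[9 + i]; XORing in
        // (current ^ desired) cancels the old byte and substitutes the new one.
        cipher[9 + i] ^= current[i] ^ desired[i];
    }
}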

Let’s see what we’ll get when we run this on this cookie:

8E92B0E4AE4BE6BEFEF2638D02416E61,6763169603E0BFA8A6BC6B2C768EABAA930E15CB7D11901C9E932ED0DBD8

And the output would be:

8E92B0E4AE4BE6BEFEF2638D02416E61,6763169603E0BFA8A6A87C246D93ABAA930E15CB7D11901C9E932ED0DBD8

Now, if I send this to the server, it will properly decrypt this into:

{"role":"admin","name":"oren"}

And we are off to the races with full administrator privileges.

This is not a theoretical issue; it has been exploited in the past, to devastating effect.

The question is, how do we prevent that? The key issue is that encryption (for stream ciphers) is basically just XORing with a secret key stream, and even for block ciphers, the encrypted data is malleable. We can’t assume that a tampered message will fail to decrypt, or that decrypting a modified encrypted text won’t produce valid plain text.

Luckily, cryptography also has an answer, we can create a signature of the data (with the secret key) and then use that to verify that the data hasn’t been tampered with. In fact, we are already using HMAC, which is meant for this exact purpose. This is a pretty big topic, so I’ll discuss that in the next post.

time to read 3 min | 554 words

In the previous post, I moved to using HMAC for the key stream generation. Together with a random nonce, we ensure that each time that we encrypt a value, we’ll get a different encrypted value. However, I want to stop for a while and talk about what will happen if we reuse a nonce.

The short answer is that a single nonce reuse has catastrophic results for our security scheme. Let’s take a look at how that works, shall we?

We are going to use:

  • Key: jMnNRO9K7DEGmrhPS6awT3w4AAjCMgaNNqPSiwTL//s=
  • Nonce: 3ilsaRYYOls4SA6XHd70jA==

Here are the encryption results:

attack at dawn! 414CE53F71D47A36FF099792858F58
defend at dusk! 445DF73B7CDB7A36FF099786818A58

Do you notice anything? At this point, we can see that we have some similarities between the texts, which is interesting. If we XOR those two texts, what will we get?

Well, if you think about it, we have:

“attack at dawn!” XOR keystream = 414CE53F71D47A36FF099792858F58
”defend at dusk!” XOR keystream = 445DF73B7CDB7A36FF099786818A58

The XOR operation is commutative and associative, so if we XOR those two values, we have:

414CE53F71D47A36FF099792858F58 XOR 445DF73B7CDB7A36FF099786818A58 =
    ( "attack at dawn!" XOR keystream ) XOR ( "defend at dusk!" XOR keystream ) =
    "attack at dawn!" XOR "defend at dusk!" =
    051112040D0F000000000014040500

Sorry for the math here, but the basic idea should be clear. Note that in this case, we don’t know either of those messages, but what we have been able to do is to get the XOR of the plain texts. At this point, an actual cryptographer will go look at frequency tables for symbols in English and start making guesses about those values. I’m certain that there are better techniques for that, but given the computing power that I have, I decided to just break it using the following code:
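
The cracking snippet isn’t reproduced here; a sketch of the idea (a hypothetical function taking a word list as a parameter, rather than whatever dictionary the original code loaded) might look like this:

const std = @import("std");

// Sketch only: given the XOR of the two plain texts, look for pairs of
// dictionary words that could explain the start of each message.
fn crackFirstWord(xored: []const u8, words: []const []const u8) void {
    for (words) |candidate| {
        check: for (words) |other| {
            if (other.len != candidate.len) continue;
            var i: usize = 0;
            while (i < candidate.len) : (i += 1) {
                if (i >= xored.len) continue :check;
                // candidate[i] ^ xored[i] is what the other message would have
                // to contain at this position for the candidate to fit.
                if ((candidate[i] ^ xored[i]) != other[i]) continue :check;
            }
            std.log.info("word: {s}, match: {s}", .{ candidate, other });
        }
    }
}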

When running this code on the XOR of the encrypted texts (which is the equivalent of the XOR of the plain texts), we get the following outputs:

info: xor: 051112040D0F000000000014040500
info: word: defends, match: attacks
info: word: attacker's, match: defender's
info: word: attacker, match: defender
info: word: defenders, match: attackers
info: word: adapt, match: dusty
info: word: attack, match: defend
info: word: defending, match: attacking
info: word: attacks, match: defends
info: word: dusty, match: adapt
info: word: defender's, match: attacker's
info: word: defender, match: attacker
info: word: defend, match: attack
info: word: attackers, match: defenders
info: word: attacking, match: defending
info: word: attacked, match: defended
info: word: defended, match: attacked

As you can see, we have a bunch of options for the plain text. Removing redundancies, we have the following pairs:

attack defend
adapt dusty

That was easy, I have to say. Now I can move on to the next word, eventually cracking the whole message. For the positions where the two plain texts are identical, I’ll get zero bytes, of course; at that point, I can simply guess based on context, but the process shouldn’t take long at all.

The question is how to solve this. Currently, the commonly accepted practice is to shout at you very loudly in code reviews and to put big notices in the documentation: Thou shalt not reuse a nonce.

There is something called SIV mode, which aims to help in this, but I want to keep that for a future post.

time to read 3 min | 536 words

I’m trying not to get too deep into the theory of encryption. I’m happy to say that so far I have been able to avoid any math whatsoever, and hopefully this is an interesting series. I do, however, have to touch on an important topic.

I’m using MD5 here for the purpose of generating a random bitstream to be used as a stream cipher. In the previous post, we looked into a key issue: if we don’t do things properly, we can easily get to the point where a single 16-byte block that we guess allows us to decrypt the entire message.

I “solved” that in the previous post by adding the key back into the MD5 computation. That works, but it isn’t ideal. There are all sorts of subtle issues that you have to take into account (length extensions, for example) and probably other stuff that I’m not even aware of. Instead, there is the HMAC family of functions: a keyed hash construction with far stronger security properties. Wikipedia does a great job explaining it. Note that there is a cost: HMAC is more expensive than the underlying hash function it uses.

The best practical explanation, by the way, is the one I found here. Our previous method of adding a key to the mix was to concatenate it in front of the message. The problem is that if md5(msgA) == md5(msgB), then md5(key || msgA) == md5(key || msgB), and that isn’t something that we want. I (personally) can’t think of a way to abuse that property to get something nasty going on with the way we use it in this encryption algorithm, but I’m very much not an expert. The HMAC model, on the other hand, uses md5( key1 || md5(key2 || msgA)), and there is no way to get that to collide, even if the messages generate collisions for MD5.

Let’s see what we need to do to switch to HMAC-MD5 instead of MD5. Here is what the code now looks like:

There are a few things that are worth noting here. We changed the size of the key (our HMAC-MD5 construction uses a 32-byte key, not a 16-byte one). Again, I have no comment on the actual security of such a scheme, mostly because I wouldn’t know where to even begin doing this analysis.

We are also no longer using the previous block as the input to the next block. Instead, we use a counter mode, where we hash the nonce and an ever-incrementing counter using the provided key. That gives us some additional safety from the previously seen issue, where an attacker could figure out what the rest of the key stream would be.
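
The updated snippet isn’t reproduced here, but the counter-mode key stream generation it describes can be sketched roughly like so (hypothetical names):

const std = @import("std");
const HmacMd5 = std.crypto.auth.hmac.HmacMd5;

// Sketch only: each key stream block is HMAC-MD5 over (nonce || counter),
// keyed with the secret key, so one recovered block of key stream no longer
// reveals the blocks that follow it.
fn keyStreamBlock(key: []const u8, nonce: []const u8, counter: u8, block: *[HmacMd5.mac_length]u8) void {
    var mac = HmacMd5.init(key); // keyed with the (32-byte) secret key
    mac.update(nonce);
    mac.update(&[_]u8{counter}); // a single-byte counter keeps the sketch short
    mac.final(block);
}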

Of course, better than before doesn’t mean that this is actually good. There are several other problems that I (as a non cryptographer) still need to fix, and probably more that I’m not seeing.

Another aspect of using this sort of construction is the additional cost that is involved. The HMAC computation isn’t that much more expensive. Looking at some benchmarks, this is about 3% slower, which is quite reasonable.

Next, we are going to see how we can abuse the malleability of this encryption system for fun and nefarious people’s profit.

time to read 3 min | 461 words

In the previous post, I showed how the lack of a nonce means that encrypting similar values will expose their content. Now I want to discuss a different matter. Let’s assume that I have some control over, or knowledge of, the plain text of an encrypted message. That is easy enough to arrange: I can simply get you to encrypt a value for me. A great scenario for that may be when you are sending data based on something that I do. Let’s assume that I get you to include the following plain text in your message: “red tanks are over the big hills”.

I am then able to intercept your message, which looks like this:

FE485CEECED5BA4CCA281D1F586E67233D9
24652E5BD690357F6E29C1C36DC446001DD
DF16536DB427337089D27A9C6FCCED553FA
4982E58F8B7B5FDD02A11C0A1C08E93FA2C
29582A15DC34CFCFB61AB2975CC0F4D29F9
C6715D0F9E2CE661C816E047590389A9064B
A5F3E3D8461D59B7C3407A76F248A71

This was encrypted with the nonce: DE296C6916183A5B38480E971DDEF48C (remember, the nonce itself is public and has no intrinsic meaning), but I don’t actually need the nonce in this case!

Now, here is what I can do. I know that the message is bigger than 16 bytes, so I can XOR parts of the encrypted message with the known plain text. If I do this properly, I get the key stream. Since the algorithm in question computes the next part of the key stream directly from the previous part, I can then decrypt everything from that point on.

To give some context, here is the full code that I need to decrypt this message:

I’m scanning through the encrypted text, at 16 bytes intervals (since that is the block size of our encryption routine) and try to XOR that value with the relevant matches from the known text. That gives me the key stream, which I then use to decrypt the encrypted text from that point (and compute the rest of the key stream for future values).

This code will output the following decrypted text:

info: decrypted: the big hills
  options: cower in fear, storm the castle, play again?
  action plan: zulu-9

And the full message that I encrypted was:

enemy said: red tanks are over the big hills
options: cower in fear, storm the castle, play again?
action plan: zulu-9

The problem was that by XORing the known plain text with the encrypted text, we exposed the key stream, which is also the input used to compute the next part of the key stream. At this point, the whole message is exposed.
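
To make the attack concrete, here is a rough sketch of that scanning step (not the post’s code; it assumes the known plain text covers a full 16-byte block starting at a block-aligned offset):

const std = @import("std");
const Md5 = std.crypto.hash.Md5;

// Sketch only: XOR a known 16-byte block of plain text with the cipher text at
// the same offset to expose that key stream block, then walk forward, since the
// scheme derives each following key stream block as MD5(previous block).
fn decryptFromKnownBlock(cipher: []const u8, known_block: *const [16]u8, block_start: usize, out: []u8) void {
    var ks: [16]u8 = undefined;
    var i: usize = 0;
    while (i < 16) : (i += 1) ks[i] = cipher[block_start + i] ^ known_block[i];

    var pos: usize = block_start;
    while (pos < cipher.len) : (pos += 16) {
        var j: usize = 0;
        while (j < 16 and pos + j < cipher.len) : (j += 1) {
            out[pos + j - block_start] = cipher[pos + j] ^ ks[j];
        }
        var next: [16]u8 = undefined;
        Md5.hash(&ks, &next, .{}); // the next key stream block depends only on the previous one
        ks = next;
    }
}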

In order to fix that, we need to add something secret back to the mix. The secret key is the obvious answer, and here is the code fix for this issue:

That fixes the problem. Even if we try this again, we’ll recover a part of the key stream, but we won’t be able to compute the next blocks of the key stream, since we need the key for that.
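
The snippet with the fix isn’t reproduced here; the gist, as a sketch (again MD5-based, with a hypothetical name), is simply to feed the key into every block computation:

const std = @import("std");
const Md5 = std.crypto.hash.Md5;

// Sketch only: mix the secret key back into every key stream block. Knowing one
// block of key stream is no longer enough to compute the next one, because that
// now also requires the key.
fn nextKeyStreamBlock(key: []const u8, prev_block: *const [16]u8, next: *[Md5.digest_length]u8) void {
    var h = Md5.init(.{});
    h.update(key); // this line is the fix: without it, MD5(prev_block) alone drives the stream
    h.update(prev_block);
    h.final(next);
}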

And yes, I know about HMAC, I’m planning to discuss that in the next post.
