Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 3 min | 505 words

When a candidate sends a CV and includes a GitHub profile link, that almost always guarantees that I’ll give that profile a look. The most interesting thing from my perspective in a GitHub profile is that it allows me to look at the candidate’s work. There aren’t that many candidates with GitHub profile links, and not having a link isn’t something that will cause me to rule out a candidate. But I thought it would be interesting to share some of my findings from such trawling of repositories.

Here is an example of something that I don’t like:

[image]

In fact, in most code bases, I’ll skim very quickly to find the data access code. SQL injection is a pet peeve of mine, and seeing how a candidate’s code handles user input is an easy way to get a first impression. It isn’t always indicative of “this person has no skills and is careless”, mind. But I found that it is a good place to start, especially because mostly I’ll see sample projects and half-finished stuff. So seeing how they treat this particular issue (which is easily found and should be familiar to most developers) is a good quick check. Then again, here is the same candidate, with another repository:

[image]

This is using Hibernate, by the way. And that kind of hurt my feelings, to be fair.

On the other hand, a different candidate:

[image]

That is much better, and shows that they pay attention to other functional requirements.

In general, I consider the presence of a GitHub link in a CV as an invitation to evaluate the candidate’s work, and I will do so with the goal of understanding their approach, the quality of their code and their skills. As such, if you include a GitHub link in a CV, I would recommend considering it to be your public face and a criterion for evaluation.

This is an advantage. It means that the mere existence of the GitHub link makes you pop out of the crowd. On the other hand, it also means that your code is under scrutiny.

My advice here is aimed at people starting out, without much background. As such, having a straightforward way to be evaluated on your skills is a plus. I would suggest making that evaluation easier. For example, a clear README is nice, especially if you explain what you were trying to do. “Playing around with Angular to see how it feels” is a great thing to have, because it gives context to the person reading your code. Especially for web applications and client side work, having a visible demo that I can quickly look at is great.

On the other hand, having well known bad practices (such as SQL Injection, plain text passwords, etc) in the code is a big negative.
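To make that first check concrete, here is the shape of the contrast I’m looking for. This is a minimal sketch in C# (the connection string, table and column names are placeholders, and I’m assuming the Microsoft.Data.SqlClient package), not code from any candidate’s repository:

```csharp
using System;
using Microsoft.Data.SqlClient;

string connectionString = "...";            // placeholder
string userName = Console.ReadLine() ?? ""; // untrusted user input

// Bad: user input concatenated straight into the SQL text, so a crafted value
// can change the meaning of the query.
// string sql = "SELECT Id, Name FROM Users WHERE Name = '" + userName + "'";

// Better: the input travels as a parameter and is never treated as SQL text.
using var connection = new SqlConnection(connectionString);
using var command = new SqlCommand("SELECT Id, Name FROM Users WHERE Name = @name", connection);
command.Parameters.AddWithValue("@name", userName);

connection.Open();
using var reader = command.ExecuteReader();
while (reader.Read())
    Console.WriteLine(reader.GetString(1));
```

The difference is a single line, which is exactly why it makes such a cheap first filter.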

time to read 1 min | 108 words

I just got a CV from a candidate looking for a junior position. I looked at the CV (and oh my God, did this guy have a lot of acronyms in there) and noted that he listed a GitHub account, so naturally I checked it.

There is a single repository there, which I’ll present to you in all its glory:

[image]

This is actually a negative. If he didn’t have a GitHub account, I wouldn’t have minded. But including one that is in this shape is not a good idea.

time to read 4 min | 666 words

Part of the job of a product owner is to pay attention to the list of issues in the issue tracker. Not just to get a feeling for the cadence of the project, but to have an impact on its direction.

Paying attention to the issues doesn’t mean just tracking down what bugs are still open, mind. Consider the case of a product owner with the release due date looming on the horizon: you need to start looking at the list of remaining issues and take active steps to make sure that you are going to get done more or less on time.

The usual rules apply, choose any two of:

  • Speed
  • Quantity
  • Quality

In other words, your team can deliver more features in time, if you are willing to sacrifice quality. On the other hand, they can keep high quality and the same number of features, but the due date will have to move.

As an aside, it is possible to get all three of these aspects at once, but only for a very short amount of time (a few days to a week or two at most), and at a very high long-term cost.

One of the things that I have observed is that in some cases, a lot of complexity and work is in the last 2% of the work, where all the polish and rough edge cases lurk. In some respects, this is actually a really good thing, because it gives the product owner the chance to remove features that won’t usually have an explicit impact on the users. A good example of this in RavenDB would be the amount of time and effort we put into the intellisense feature of RQL queries in the studio. That falls under the Nice To Have set of features. It is unlikely that we’ll get many upset users if the intellisense isn’t on par with something like Visual Studio or ReSharper, so beyond getting some basic functionality right, we can defer improvements there if we don’t have the extra capacity to complete this by the expected date.

I’m sure that you can think of other examples in your own projects. Note that this requires you to understand what exactly your users value your software for. In the case of RavenDB, adding more query functionality and speeding up overall system performance ranks much higher than adding extra smarts to intellisense that is mostly used during exploration / demos.

On the other hand, the effects of pushing such features down the road accumulate over time. In other words, if you keep your priorities straight and select which features should go into the product, you will defer the small fry over and over. At some point, you’ll need to make a decision about them. You can either decide that they don’t make sense anymore or that they are never really going to be important enough to actually put into the “let’s get this done” queue.

Alternatively, you might want to put them in the idle bin. In other words, whenever you have an idle portion in your development, you can peek into the idle bin and get some tasks from there. That is also a good place to have a new team member start from. These are tasks that are minor and not that important, after all, so they can use them to learn the codebase. In fact, we have used this in the past as the task bin for interns. That is usually a really good fit, for the same reasons that they are good tasks for a new team member, with the added benefit that they are usually well scoped and if the intern messes up, you didn’t lose too much.

Regardless, the idle bin notion is important, because otherwise your future tasks queue is going to grow larger and larger, and it will be ever harder to figure out what tasks actually matter.

time to read 4 min | 610 words

Imagine that you are the owner of Gary’s Shoes, and that you want to get data from all of your multitude of stores into a centralized location. You’ll use that data to make decisions, predict future trends, etc. Given that each store must operate independently, you have a server in each location that will push its changes up to (and get updates from) the HQ cluster. You can see an example of this kind of setup in this post.

This works quite well, but it does require the user to be aware of a potential issue. When you have a massively distributed data flow process set up, you also need to pay attention to the quiet in the noise. What do I mean by that?

One of our customers has RavenDB deployed to tens of thousands of locations worldwide. At any given time, at least some of those locations are going to be unavailable. In some locations, part of closing down for the day means literally flipping the master switch on electricity for the entire building. In others, you might have someone tripping over the router or some local or regional network outage.

Part of the strategy for dealing with such a data set, coming from so many separate locations, is the need to monitor when we aren’t getting data. The fact that in most of our locations we have near real time data is very powerful for the business. But you also need to see where you aren’t getting the data from and set up proper alerts and monitoring for the missing data. From a business perspective, it is also advisable to surface that kind of detail all the way to the user. If you are going to be ordering inventory for the stores in a particular state, but the two major stores in the area are down because of a network issue and have been down for two days now, you want to be aware of that and figure out that you are working with out-of-date data.

To be honest, the issue isn’t so much about two days of lag in the case of a once-in-a-blue-moon type of error. In the scenario outlined above, in pretty much all business scenarios that I can think of, you won’t really see any impact on the decision making of the organization.

The killer is when you have some sort of a problem that goes on for a while. A DNS update that was missed because of a bad DNS cache policy, for example. Now your updates to HQ go into the void on a consistent basis. On the other hand, everything else continues to function properly, both locally and for HQ. If this isn’t accounted for, it is easy to miss it for a long period of time. I have seen such a case that was only discovered when the year’s end numbers didn’t quite match up with what they were supposed to be. Given that this was the second year in a row this happened, the investigation found that a network issue had indeed caused a very long term topology failure. This was actually properly reported, in a log file that no one ever read.

Lesson learned: make sure that your data flow strategy accounts for such things and brings them to the users’ attention. Actually resolving the issue was a network configuration change that took minutes, and the entire dataset was synchronized within a few hours afterward. But finding out that there was even a problem took effectively forever.
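To sketch what that monitoring can look like (the class name and the threshold below are made up for illustration, this isn’t part of RavenDB), the core of it is simply tracking the last time each location pushed data and surfacing anything that has gone quiet:

```csharp
using System;
using System.Collections.Generic;

// Track the last time each store pushed data to HQ and flag anything that has
// been silent longer than a threshold, so a quiet topology failure can't hide.
class ReplicationFreshnessMonitor
{
    private readonly TimeSpan _maxSilence = TimeSpan.FromHours(12); // assumed threshold
    private readonly Dictionary<string, DateTime> _lastSeen = new();

    public void RecordUpdate(string storeId) => _lastSeen[storeId] = DateTime.UtcNow;

    public IEnumerable<string> QuietStores()
    {
        foreach (var (storeId, lastSeen) in _lastSeen)
        {
            if (DateTime.UtcNow - lastSeen > _maxSilence)
                yield return storeId; // surface this to operators AND to business users
        }
    }
}
```

The important part is that the list of quiet stores reaches the people making decisions on the data, not just a log file.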

time to read 3 min | 558 words

We designed RavenDB to be a server side database, to be used to run large scale business applications. Surprisingly for us, there is a large group of users that have taken RavenDB and actually run it as part of their deployed systems. In other words, instead of having a single large RavenDB cluster, they will typically deploy many (hundreds in the small cases, tens of thousands to millions in the large cases) RavenDB instances across a wide variety of locations.

Part of that is the fact that RavenDB can be embedded inside an application quite easily. That means that we don’t need complex setup or administration. You can just use RavenDB from your application and everything Will Just Work. Another factor is that you can run RavenDB on very low end machines, including 32 bit machines, ARM SoCs, etc.

One use case was a point of sale system that had to spec out their hardware a decade in advance and had to deal with existing installations that were still running hardware from 10 years ago (with little desire to upgrade). Another use case was deploying RavenDB as part of an industrial robot package, with RavenDB installed on a 32 bit ARM system on chip that controls the robot.

That kind of deployment pattern leads to interesting requests. For example, several of our customers need ad hoc replication in a location, so all the nodes in a particular physical location will join together into a full mesh of replicated nodes. This gives us high availability in a particular location, with any node in the network being able to service any request across the entire location. Boot up a new machine, wait a bit for the rest of the network to update it and you are good to go. This also helps when you consider your machines to be unreliable (because they are old, beaten down and generally minimally maintained).

Another scenario with the need for dynamic topologies is the deployment of RavenDB as a set of independent nodes that need to report to some sort of headquarters. This is easy to do by defining external replication or ETL on the node and having it send all the relevant data to a central location for processing. This way, you get a cheap “always available” local node but can still have a global view of your data. I posted about something similar in the past, if you care for the details.

We are now looking for additional features to serve this kind of deployment. In particular, we are interested in making it easy to share data and generate analytics across a widely distributed and separated set of instances. One of the things that we are currently considering is some form of integration with the cloud. For example, consider Amazon Athena, which allows you to run analytics queries on files residing in S3. We can define ETL processes that would upload the data from RavenDB as it is changed on each individual node. This way, you have each node pushing data to the cloud and a central location that can run live analytics on the data.
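As a rough sketch of that direction (this is not an existing RavenDB feature; GetDocumentsChangedSince and the bucket name are hypothetical, and I’m assuming the AWSSDK.S3 package), each node could push its changed documents to S3, where Athena can query them centrally:

```csharp
using System;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class CloudPushSketch
{
    private readonly IAmazonS3 _s3 = new AmazonS3Client(); // default credentials / region

    public async Task PushChangesAsync(string storeId, DateTime since)
    {
        // Upload every document changed on this node since the last push.
        foreach (var (id, json) in GetDocumentsChangedSince(since)) // hypothetical local query
        {
            await _s3.PutObjectAsync(new PutObjectRequest
            {
                BucketName = "garys-shoes-analytics",  // assumed bucket name
                Key = $"{storeId}/{id}.json",
                ContentBody = json
            });
        }
    }

    private (string id, string json)[] GetDocumentsChangedSince(DateTime since)
        => Array.Empty<(string, string)>(); // placeholder for the real data source
}
```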

What are your thoughts on this? And what other features do you think will serve this kind of scenario?

time to read 18 min | 3409 words

This post is the text version of a presentation I gave a few weeks ago. It is in reference to this classic post by Joel.

In 2015, I decided that we needed to reboot RavenDB. I did that with the full understanding that this is going to be a huge task, including knowing that it will be bigger than what I can project, even if I take this line of thinking into account.

RavenDB 1.0 was written a decade ago. It was written because it didn’t leave me alone and I wanted to get it out of my head. At the time, I was focused more on getting it out the door (and my head) and was taking shortcuts in the implementation. That allowed me to cut down dramatically on the amount of work that is involved in it. At the same time, this put some constraints on the implementation and architecture. The most obvious one was the reliance on Esent, which tied us to Windows. C# as the implementation language, to a lesser extent, also had the same issue until .NET Core. (Yes, I’m aware of Mono, I have no idea how people managed to run anything beyond hello world on it. We tried porting RavenDB to Mono multiple times, and I still bear the scars.)

I went back and looked at our release notes; in literally every major release, we have spent a significant amount of time and effort on “performance optimizations”. In January of 2015 we had a few sprints that were dedicated to just this issue. We went down to assembly code in some cases, analyzed our hotspots and optimized things in a very serious manner. We got some amazing performance improvements, reducing the runtime by orders of magnitude in some cases. But it still felt like we were hitting a limit. What is more, experience from customers in production showed us that there were a number of cases where we ran into problematic behavior. This mostly happened on large / complex projects. And nearly all those issues were related in one way or another to memory and the GC.

Our indexing, for example, would read data from disk into memory. That was meant to save disk I/O during indexing, and it included pretty smart prefetching and monitoring behavior. It also had the side effect of loading documents (which can be large) into managed memory and holding on to them long enough to push them into Gen1 and Gen2. Then they would be indexed and need to go away. But given that they were pushed to a higher generation… that meant a more expensive collection cycle.

RavenDB was created before the pervasive use of fast disks, and it turns out that in some cases, reading the data from disk was actually faster than parsing it using JSON.Net. In other words, our “I/O bound” process of reading documents was actually dominated by the time it took to parse the JSON text. That does not include the cost of actually cleaning up this memory. Complex JSON documents can have a lot of objects, and the cost of GC rises with the number of objects that are being tracked. These were pretty fundamental problems, which I didn’t think we could fix in a piecemeal fashion.
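A toy illustration of the promotion problem (nothing RavenDB specific here, just the .NET GC’s behavior): an object that is held across a couple of collections ends up in Gen2, where reclaiming it later requires the most expensive kind of collection.

```csharp
using System;

// Stands in for a parsed document that the indexer holds on to for a while.
var document = new string('x', 10_000);

Console.WriteLine(GC.GetGeneration(document)); // 0: freshly allocated

GC.Collect(); // the document survives one collection...
GC.Collect(); // ...and another, while "indexing" is still running

Console.WriteLine(GC.GetGeneration(document)); // 2: now only a full GC can reclaim it
```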

That time also coincided with a peak in the number of support incidents that we got. Unlike many other open source projects, we treat support as a cost center, not a revenue center. In other words, we don’t want to have more support; that isn’t how we want to make money. Being a database, we were frequently at the heart of things, and our customers and users are very sensitive to any issue that might arise. I’m painting somewhat of a bleak picture, I’m aware. It wasn’t nearly that bad from the point of view of any particular customer. But on aggregate, from our point of view, it felt like a nasty game of whack-a-mole. As soon as we provided a solution to one customer’s issue, another would pop up, somewhat related but just different enough to not be fixed by the previous change. These weren’t regressions, mind. These were just a lot of places where the changing times violated some of our core assumptions.

Toward the end of 2015, I sat down and really thought about what we needed and were missing. This was the situation as I saw it.

[image]

There was also the issue that we have learned a lot over the years. We built Voron (our storage engine) from the ground up, we had a lot of experience running in production and we knew what kind of tasks our customers were using us for. I kept thinking that I wished I had a time machine and could do things over properly. Given that my time machine is still in the shop, I decided that we had two options:

  1. Minor fixes along the way – slowly improving our behavior as we stride toward the desired architecture and usage.
  2. Break it all – essentially start from scratch, with a new architecture and write it the way we want it to be written.

The obvious choice was to do this slowly. The problem was that I really couldn’t think of a good way to actually achieve that. The kind of changes we wanted to make started from replacing the most fundamental structure we had, how we represent JSON in our document database and got more complex from there. We wanted to change how we store data on disk, how we index data, how we … literally every single feature that we had was going to be transformed in some way.

We also had additional issues. The Windows only limitation was really hurting us and we really wanted to get a good Linux story going. The support burden was also at the very top of my mind as we considered what to do. In the end, we came up with the following decisions:

  • We don’t require backward compatibility. Either on the server side or client side.
    • That was the hardest decision, but it meant that we could actually tackle some of the biggest issues freely and without constraint.
    • That meant that we wanted to keep the same feeling, but be able to make changes to corners of the API that atrophied.
  • Support cost and simplified operations as a primary concern.
    • This meant that, at the design level, we took into account debugging considerations.
  • Order of magnitude performance improvement across the board.
    • Otherwise, it isn’t worth the effort.
  • Cross platform from the get go.

That was in Sep 2015. I sat down and wrote a design document that outlined the new architectural approach, spiked a few things and then we were off to the races. I blogged all about the process extensively, so I’m not going to repeat that.

We decided to use DNX (which became .NET Core) at a very early stage. Initially, I don’t believe that we even had a debugger, and most of our builds had to be triggered from the command line. I guess that if you are going to make a risky decision, you might as well make a few others…

I’ll say that I made a lot of preparations for failure up front. Part of the reason we went with DNX was that we knew that, worst case scenario, we could spend a few days and get it working on the full .NET framework if we had to. I took this step with a lot of backward glances to make sure that we wouldn’t get lost.

Alongside our experience in supporting RavenDB, we also ran a UX study and combed all the incident reports we generated from support calls. The idea was to take as much time as necessary to get things as right as we could manage. The studio change between 3.5 and 4.0 is massive, and was driven by getting a talented professional to design each part of the UI, guided by real world UX study and analysis. We kept asking “where does it hurt?” and whenever we found a cause of pain we worked to alleviate it.

Some of our guiding principles during that phase of the project were:

  • Cross platform from the get go.
    • We couldn’t afford to port it midway through. Too complex and prone to failure.
  • OWN the stack.
    • We don’t want to use any components that we don’t have good visibility into and the ability to work  with.
    • In particular, anything that is a core competence should be owned and built by us. For our scenario, that means primarily the storage engine.
  • Build for performance.
    • I wasn’t kidding about requiring a x10 performance improvement. We had one or two devs at all times running benchmarks and fixing the performance of every completed feature.
  • Build for operations.
    • Each and every design decision should be considered in light of its operational behavior.
    • In particular, we excised any feature that relied on hard to figure out technology or integration (I’m looking at you, Windows Auth).
    • This included changing the design of the software so a core dump would make it easier to figure out what is going on. We also explicitly opened up a lot of the internal behavior as debug endpoints and plugged them into the studio so operators would have greater visibility. As an aside, that was very helpful in figuring out our performance bottlenecks, and we worked to improve that part of the project as we strived for ever faster performance.
  • Reducing the support burden as a major goal.
    • A lot of the previous points tie into this. But this is also where we combed over any issue that boiled down to “user misconfigured / misused” and built alerts directly into RavenDB to give the user early warnings about common issues.
  • We defined a set of common scenarios. Reading / writing documents, for example, and then we spent months on designing the whole system so these would be fast, seamless and easy.

A good example of that is how we store documents in RavenDB now. We have our own binary format that allows us to avoid parsing the document when reading from disk, plays nicely with memory mapped files (which is how Voron, our storage engine, works) and effectively allows us to hand a pointer to a memory mapped buffer and start working with it as a JSON document (see the sketch after the list below) without:

  • Allocating any managed memory
  • Parsing JSON
  • Requiring caching / pre-fetching, etc.
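To make the idea tangible, here is a minimal sketch of reading values straight out of a memory mapped file. The file name and layout are made up; this is not RavenDB’s actual format, just the general technique:

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

// Map the file and read fixed-size values in place: no managed copy of the
// data is allocated and no JSON text is parsed. The OS pages data in on demand.
using var mmf = MemoryMappedFile.CreateFromFile("data.buffer", FileMode.Open);
using var accessor = mmf.CreateViewAccessor(0, 0, MemoryMappedFileAccess.Read);

int documentCount = accessor.ReadInt32(0);        // header field at offset 0
long firstDocumentOffset = accessor.ReadInt64(4); // header field at offset 4

Console.WriteLine($"documents: {documentCount}, first at offset {firstDocumentOffset}");
```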

We spent a lot of time thinking about what we wanted to do, and then we looked into how the operating system expects us to behave. The idea is that if we play to the operating system’s expectations, we can reap a lot of benefits from the OS’ own behavior. This is how RavenDB handles loading data to memory. We let the operating system handle it and just make sure that our own behavior is both predictable and applicable to the OS’ optimizations.

I mentioned that GC was the bane of our existence, right? We moved a lot of the memory management in RavenDB to unmanaged code and handle it explicitly. That gives us the advantage that we know a lot more about how we expect to use the memory and can spend the time to make this highly optimized.

On the debugging side of things, we made some changes to the design of RavenDB with the intent of making it easier to debug and analyze core dumps. For example, most of the long running threads are named, so it is easy to figure out who they belong to (and not just what they are currently doing). For that matter, long running tasks are using synchronous mode, specifically because it means that we can drop into the debugger / core dump and look at their state. This is much harder to do with async methods. You might have noticed that I mentioned core dumps a few times, right? These are essential to figuring out what is going on with your software on production systems. We learned a lot about production debugging over the years and with RavenDB 4.0 we took steps to make things easier. For example, many data structures in RavenDB have an extra field called tag that is there specifically to provide debugging information about the value if we are looking at it in the debugger.
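For example, this is roughly what the named, synchronous, long-running thread pattern looks like (the thread name here is made up for illustration):

```csharp
using System;
using System.Threading;

// A named, dedicated thread shows up with a meaningful name in the debugger or
// in a core dump, instead of being an anonymous pool thread buried in async state.
var indexingThread = new Thread(() =>
{
    while (true)
    {
        // ... do the indexing work synchronously, so a debugger can see exactly
        // where the thread is at any point in time ...
        Thread.Sleep(1000);
    }
})
{
    Name = "Indexing of Orders/ByDate", // hypothetical name
    IsBackground = true
};
indexingThread.Start();
```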

An obvious question for this project was whether we should stay on .NET or move to an unmanaged language. I considered this seriously, with Rust, C/C++ and Go being the top contenders as the implementation language. I decided to stay with .NET for several reasons. Productivity was right there at the top. We already had a team that was well versed in .NET, and while that isn’t a blocker, it was a consideration. The tooling around .NET is leagues ahead of anything else that I have seen. That includes both write time (where Rider / ReSharper rules) and debug time (I found nothing remotely close to Visual Studio for debugging non trivial code easily). The cross platform angle, which was the most serious issue for us, was resolved with .NET Core.

Rust wasn’t mature enough at the time (2015), and even today I think that a language that prides itself on being hard to learn isn’t a good choice. C++ was a strong contender, but the slow compilation times were an issue. The tooling is similar, but inferior in many respects. Cross platform C++ is possible, and modern C++ is very different from what I remember. However, it comes with a very high degree of complexity and would take a lot of time to master again properly. C (distinct from C++) is a much simpler language. It still has the compilation speed issues, but the language is much simpler. I think that if it had a defer mechanism built in it would be a much nicer language. Go was ruled out because if I’m going to be writing everything from scratch, I might as well go all the way down to C’s level and not stop with something that still has GC pauses.

The choice of C# on CoreCLR has been vindicated. The project team and the community at large put a large emphasis on performance, and we keep getting more and better ways to handle low level details while still being able to use higher level concepts when needed. And the tooling… dear God. I routinely work with other platforms, testing things out, but there is nothing that comes close to the toolsets that are available for C#.

An interesting wrinkle with the 4.0 release was that we started it before the 3.5 release was even out. For a while, we had a small team working on the foundations of 4.0 while the rest of us were busy hammering in the last details of RavenDB 3.5.

As soon as we had the bare minimum to go (basically, it compiled and could save a single document and even regurgitate it back up again) we started heavy parallelization of the work. We had a team working on indexes while another was dealing with (even at this early stage) performance and another was working on the user interface. In that time frame, we hired a few more people and could really see the benefits of all of the separate teams working in concert. One of the priorities of this method was to get to a demoable state. In fact, at some point we had over 30% of the people working on either the UI directly or UI related infrastructure.

One of the things we kept hearing back is that the UI and the insight it provided into what is going on inside the database were crucial for our users. It also helped us a lot as we developed RavenDB to get to play with it directly and see things in an easy manner. The UI has been at once one of the most trivial of changes and the most profound. On the one hand, we didn’t really make any significant architectural changes in the UI. On the other hand, we re-wrote most of it with the aid of a UX study and a real professional at the helm. That gave us a lot of visible polish that underscores the amount of work that happened in the engine.

How did all of that turn out?

  • I initially thought it would last a year to 15 months, with an expected due date of Dec 2016. That was with a team size of about 25 people. Work started in Sep 2015.
  • As it turns out, RavenDB accumulated a lot of features in the years it spent in production. We had to evaluate each of them, see how it would fit into our architecture and get it ported. That took a lot of time, especially because in many cases we took the time to change the approach we had for the feature completely.
  • By mid 2016 I had already changed the schedule to Jun 2017.
  • Close to the end of 2016 we released RavenDB 3.5. This freed up some people to work on the 4.0 release, but also meant we had higher than usual support calls while customers integrated the new release.
  • The actual release of 4.0 happened in Feb 2018. So just about 30 months from the start, or about double the time I expected it to take.
  • We had to cut some features out to make the 4.0 release, all of them are back in the 4.1 release, scheduled for next month.
    • This means that to get back to the same place took us 3 years. But we now have a lot of extra features.
    • Most of the missing features were pretty minor, though, and rarely used.

What did all of that gain us?

  • Performance: Single node. Over 100,000 writes / sec and over 1,000,000 reads / sec in our benchmarks.
    • Real world users report performance boosts of 20x to 52x.
  • Support call duration dropped from days / weeks to about 2 – 4 hours.
  • Cross platform on Windows, Linux, ARM and MacOSX.
    • We are now deployed to production on Raspberry Pis, because we are the fastest real database on that kind of hardware.

We were over a year overdue, and even with the deadline being extended several times we had to cut some features to actually make the cut for release. The general acceptance of the new release by the community has been a roaring success. We exceeded our own goals for the project, even if we took a lot longer than expected to get there.

Now, for some additional thoughts. We didn’t really re-write the whole thing from scratch. Instead, we had a lot of code that we could at least partially reuse. The storage engine was ported, not re-written, for example. However, we changed architectures in a pretty significant way. For example, the format and manner of working with JSON changed entirely between these two releases. We are a JSON document database. As you can imagine, we pretty much had to modify everything as a result of that.

We didn’t design the whole thing from the start. We had a rough outline and we let things roll from there. As a result of the new architecture and expectations, by the time we hit a particular feature we were able to utilize what we had already learned about how to work with the new architecture to improve things. We also weren’t afraid of changing things multiple times. Authentication had several major design changes midway through, and it ended up so much simpler than what we had before. Even pretty late in the game, we still made significant changes. The RQL support, having a SQL like querying language, came about in the last 20% of the project.

That was a huge change, and I got a lot of “here comes the crazy train again” feedback. This is probably one of the reasons we were delayed by another few months. But it was worth it by far. Basically, because we were able to give up on backward compatibility, we were able to move quickly and change stuff as we wished. We knew that we wouldn’t have another change like that for another decade, so we tried to get the big changes done.

In retrospect, I think it worked quite well. I’m really proud of how RavenDB 4.0 turned out.

time to read 3 min | 531 words

An interesting question popped up in the mailing list about the behavior of RavenDB: when will the RavenDB client send the certificate to the server for authentication? The SSL handshake typically takes multiple round trips to negotiate a connection, and a certificate can be a fairly large object. It makes sense that understanding this aspect of RavenDB’s behavior is going to be important for users.

In the mailing list, I gave the following answer:

RavenDB doesn’t send the certificate on a per request basis; instead, it sends the certificate at the start of each connection.

I was asked for a follow up, because I wasn’t clear to the user. This is a problem; I was answering from my perspective, which is quite different from the way that a RavenDB user from the outside will look at things. Therefore, this post, and hopefully a more complete way of explaining how it all works.

RavenDB uses X509 client certificates for authentication, using SSL to both authenticate the remote client to the server (and the server to the client, using PKI) and to ensure that the communication between client and server is private. RavenDB utilizes TLS 1.2 for the actual low level wire transfer protocol. Given that .NET Core doesn’t yet implement TLS 1.3 or FastOpen, that means that we need to do the full negotiation on each connection.

Now, what exactly is a connection in this regard? Is this going to be every call to OpenSession? The answer is emphatically not. RavenDB manages a connection pool internally (actually, we rely on the HttpClient’s pool to do that). This means that we are only ever going to have as many TCP connections to the server as you have concurrent requests. A session will effectively borrow a connection from the pool whenever it needs to talk to the server.

The connections in the pool are going to be re-used, potentially for a long time. This allows us to alleviate the cost of actually doing the TCP & SSL handshake and amortize it over many requests. This also means that the entire cost of authentication isn’t paid on a per request basis, but per connection. What actually happens is that at the beginning of the connection, the RavenDB server will validate the client certificate and remember what permissions are granted to it. Any and all requests on this connection can then just use the cached permissions for the lifetime of the connection. This stateful approach reduces the overall cost of authentication because we don’t need to run full validation on every request.

This also means that OpenSession, for example, is basically free. All it does is allocate a bunch of dictionaries and some other data structures for the session. There is no wire traffic when the session is created, only when you actually make a request to the server (Load, Query, SaveChanges, etc.). Most of the time, we don’t need to create a new connection for that, but can use a pre-existing one from the pool. The entire system was explicitly designed to take advantage of best practices to optimize your overall performance.
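Putting it together, a typical client setup looks roughly like this (the URLs, database name and certificate path are placeholders): the certificate is attached to the document store, the authentication cost is paid when a pooled connection is established, and OpenSession itself does nothing on the wire.

```csharp
using System.Security.Cryptography.X509Certificates;
using Raven.Client.Documents;

using var store = new DocumentStore
{
    Urls = new[] { "https://a.my-cluster.example.com" }, // placeholder URL
    Database = "Orders",                                 // placeholder database
    Certificate = new X509Certificate2("client.pfx", "pfx-password")
};
store.Initialize();

using (var session = store.OpenSession()) // cheap: just allocates session state
{
    // The first actual request borrows a pooled connection; if that connection
    // is new, the TLS handshake and certificate validation happen here, once.
    var order = session.Load<object>("orders/1-A");
}
```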

time to read 2 min | 333 words

We are nearly done with RavenDB 4.1. There are currently a few minor things that we are still handling, but we are gearing up to push this to our production systems as part of our usual test matrix. Naturally, this means that we are already thinking about what we should do next.

There is a whole bunch of big ticket items that we want to look at, but the most important of them is the one that is likely to garner very little attention from the outside. We are going to take advantage of the new Span<T> API throughout the product. This is something that I really want to get to, since we have a lot of places where we touch native memory, memory mapped sections and in general pay a lot of attention to manual memory management. There are several cases where we had to copy data from unmanaged memory to managed memory just to make some API happy (I’m looking at you, Stream).

With the Span<T> API, that is no longer required, which means that we can usually just hand the network a pointer that is mapped directly to a file and reduce the amount of work we need to do significantly. We are also going to go over the codebase and see where else we can take advantage of this behavior. For example, moving our code to System.IO.Pipes opens up some really interesting scenarios for simplification of code and reduction of overhead.
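As a minimal sketch of the kind of thing this enables (this is not RavenDB code, and it requires compiling with unsafe blocks allowed): wrap a piece of native memory in a Span<byte> and hand it to APIs directly, with no intermediate byte[] copy.

```csharp
using System;
using System.Runtime.InteropServices;

unsafe
{
    // Pretend this is a buffer that is mapped directly to a file or the network.
    IntPtr native = Marshal.AllocHGlobal(1024);
    try
    {
        var span = new Span<byte>((void*)native, 1024);
        span.Fill(0);

        // Read a value in place; no managed array was allocated or copied.
        int header = BitConverter.ToInt32(span.Slice(0, 4));
        Console.WriteLine(header);
    }
    finally
    {
        Marshal.FreeHGlobal(native); // we own the lifetime, not the GC
    }
}
```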

We are going to apply lessons learned about how we actually manage memory as part of that, so just calling it Span<T> is a bit misleading. The underlying reasoning is that we want to simplify both I/O and memory management, which are very closely tied together. This shouldn’t actually matter to users, except that the intent is to improve performance once again.

time to read 4 min | 738 words

When looking for candidates, there is an ideal candidate: the ability to take one of the people already working for you, with all the domain knowledge and expertise, and clone them. Hopefully multiple times. If you do this right, you can probably stick the clones in a basement with a bunch of computers, slide Pizza under the door every so often and get a lot of work done for the price of Pizza.

While this (dystopian) scenario is quite nice in terms of overall effort, I do believe that there are some issues with it. Naturally the biggest hurdle is the medical bill for cloning people; there is also some noise about this being inhumane. The real issue, of course, is the lack of feasible technology to accelerate the growth of the clones. I’m sure this will be solved at some point. My time machine comes back from the shop on Monday (and isn’t that ironic), so I’ll be investigating this further at that point.

Setting the clone wars option aside, there is the need to get new hires. And there are several ways to go about that. You can try getting people with some or all of the skills that you require. Or you can get someone that is a blank slate and train them internally. This post is about the latter option.

The question is really what you actually define as a blank slate. For example, hiring my 3 year old daughter as a software developer would be really nice. She is a blank slate, but given that we are currently teaching her to count to 20, I think that this might be premature.

To be perfectly honest, the amount of knowledge that is required to be an efficient developer is staggering. If I were to start the clock from scratch, I think that I would be sitting there twiddling my thumbs to this day, scared of all the things that I must understand to be effective. In some way, not knowing how much I don’t know was really helpful. It allowed me to go out and learn without being overwhelmed. Look at just C#, for example, and compare the language from 1.0 to 7.3. Each change made sense at the time, and incrementally added to the language. Some of them were bigger than others (generics, linq) but they came in byte size chunks (typo intended). Trying to grok it all at once… much harder.

We actually hire fairly often directly from college, either immediately after completing the degree or even beforehand. We usually look for people that have gone beyond rote learning for the good grade and are actually able to understand why things are happening, not just what API to call. Our most junior hire ever had just finished high school and had a few months free before going to the army, effectively being an intern in the company for a short while.

The approach we take for onboarding a new employee (with no practical experience) and an intern is quite different. For a full time employee, my priority is to get them well situated and familiar with how we work and the overall codebase. That means that the typical first assignments will be things that are on the sidelines. Things that are okay if they take a little longer, since they are used to get the new developer familiar with the landscape of the code. Examples include writing new clients, building internal applications using RavenDB, benchmarking work and building diagnostics and debug tools for production analysis.

For an intern, however, the situation is different. Given that I’m only going to have the intern for a few short months, spending 2 – 3 months training them to the expected level of a full time employee is going to be a waste. Instead, we try to give the intern experimental and research projects. Things that we wished we could have done if we had the time, but typically do not. Some of them are pretty complex, but the key “feature” in this regard is that they can be approached without a deep understanding of RavenDB. For example, SQL Migration, one of the main features of RavenDB 4.1, was actually initially developed by an intern.

time to read 2 min | 377 words

We talked to a candidate recently with a CV that included topics such as Assembly, SQL and JavaScript. The list of skills was quite eclectic, and we called the candidate to hear more about them.

The candidate completed a two year degree focused on the foundations of development, but it looked like whoever designed it was looking primarily to provide a good foundation more than anything else. In other words, the end result is someone that can write SQL queries, but never built a data driven application; who knows (about? I’m not really clear at what level that was) assembly, but never wrote a real application. It doesn’t sound bad, I know, but it was like moving into a new house just after the contractor is done with the foundation. Sure, that is a really important part, but you don’t even have walls yet.

In 1999, I did a year long course that was focused on teaching me C and C++. I credit this course for much of my understanding of the basics of programming and how computers actually work. It was an eye opening experience. I wouldn’t hire my 1999 self. As I recall, that guy (can I deny knowing him?) wrote the following masterpieces:

  • sparse_matrix<T> in C++ templates that used five (5!) levels of pointer indirection!
  • The original single page application. I wrote an entire BBS system using a single .VBS script that used three levels of recursive switch statements and included inline HTML, JS and VB code!

These are horrible things to inflict on an innocent computer, but they got me started in actually working on software and understanding things beyond the basics of syntax and action. I usually take the other side, that people are focused far too much on the high level stuff and do not pay attention to what is actually going on under the hood. This was an interesting reversal, because the candidate was the opposite. They had some knowledge about the basics, but nothing built upon it yet.

And until you actually build upon the foundation, it is just a hole in the ground that was covered in some cement.
