Oren Eini, CEO of RavenDB, a NoSQL Open Source Document Database

time to read 2 min | 338 words

Update: Yes, I am crazy! Turns out that I forgot to do "command.Transaction = tx;" and then went and read some outdated documentation, and got the completely wrong picture, yuck! I still think that requiring "command.Transaction = tx;" is bad API design and error prone (duh!).

Someone please tell me that I am not crazy. The output of this program is:

Wrote item
Wrote item
Wrote item
3
Wrote item
Wrote item
4
Wrote item
Wrote item
5
Wrote item
Wrote item
6
Wrote item

This is wrong on so many levels...

using System;
using System.Data;
using System.Data.SqlServerCe;
using System.IO;
using System.Threading;

public class Program
{
	const string connectionString = "Data Source=test.dsf";

	public static void Main(string[] args)
	{
		File.Delete("test.dsf");
		var engine = new SqlCeEngine(connectionString);
		engine.CreateDatabase();

		using (var connection = new SqlCeConnection(connectionString))
		{
			connection.Open();
			SqlCeCommand command = connection.CreateCommand();
			command.CommandText = @"CREATE TABLE Test(Id INT IDENTITY PRIMARY KEY, Name NVARCHAR(25) NOT NULL)";
			command.ExecuteNonQuery();
		}

		ThreadPool.QueueUserWorkItem(ReadFromDb);

		using (var connection = new SqlCeConnection(connectionString))
		{
			connection.Open();
			using(IDbTransaction tx = connection.BeginTransaction(IsolationLevel.Serializable))
			{
				while(true)
				{
					using (SqlCeCommand command = connection.CreateCommand())
					{
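						// Note (see the update above): the missing "command.Transaction = tx;" belongs here,
						// so this command actually executes outside the serializable transaction.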
						command.CommandText = @"INSERT INTO Test(Name) VALUES('A');";
						command.ExecuteNonQuery();
					}
					Console.WriteLine("Wrote item");
					Thread.Sleep(500);
				}
			}
		}
	}

	private static void ReadFromDb(object state)
	{
		Thread.Sleep(1000);
		using (var connection = new SqlCeConnection(connectionString))
		{
			connection.Open();
			using (IDbTransaction tx = connection.BeginTransaction(IsolationLevel.Serializable))
			{
				while (true)
				{
					using (SqlCeCommand command = connection.CreateCommand())
					{
						command.CommandText = @"SELECT COUNT(*) FROM Test;";
						Console.WriteLine(command.ExecuteScalar()); 
					}
					Console.WriteLine("Wrote item");
					Thread.Sleep(500);
				}
			}
		}
	}
}
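
For reference, here is a minimal sketch of what the write loop looks like with the fix from the update applied, i.e. with the transaction explicitly assigned to the command (my reconstruction, not the code that produced the output above):

using (var connection = new SqlCeConnection(connectionString))
{
	connection.Open();
	using (SqlCeTransaction tx = connection.BeginTransaction(IsolationLevel.Serializable))
	{
		using (SqlCeCommand command = connection.CreateCommand())
		{
			// Enlist the command in the transaction; this is the line that was missing.
			command.Transaction = tx;
			command.CommandText = "INSERT INTO Test(Name) VALUES('A');";
			command.ExecuteNonQuery();
		}
		tx.Commit();
	}
}
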
time to read 9 min | 1710 words


It gives me great pleasure to announce that NHibernate 2.0 Alpha 1 was released last night and can be downloaded from this location.

We call this an alpha, but many of us are using it in production, so we are quite confident in its stability. The reason this is an alpha is that we have made a lot of changes in the last nine months (since the last release), and we want to get more real-world experience before we ship it. Recent estimates are that about 100,000 lines of code have changed since the last release.

You can see the unofficial change list below. Please note that there are breaking changes in the move to NHibernate 2.0. There are also significant improvements in many areas, as you will see in a moment.

We are particularly interested in hearing about compatibility issues, performance issues and "why doesn't this drat work?" issues.

We offer support for moving to NHibernate 2.0 Alpha on the NHibernate mailing list at: [email protected] (http://groups.google.com/group/nhusers).

And now, for the changes:

  • New features:
    • Add join mapping element to map one class to several tables
    • <union> tables and <union-subclass> inheritance strategy
    • HQL functions 'current_timestamp', 'str' and 'locate' for PostgreSQL dialect
    • VetoInterceptor - Cancel Calls to Delete, Update, Insert via the IInterceptor Interface
    • Using constants in select clause of HQL
    • Added [table per subclass, using a discriminator] support to NHibernate
    • Added support for paging in subqueries.
    • Auto discovery of types in custom SQL queries
    • Added OnPreLoad & OnPostLoad Lifecycle Events
    • Added ThreadStaticSessionContext
    • Added <on-delete> tag to <key>
    • Added foreign-key="none", since the parent has not-found="ignore" (not relevant to SQL Server)
    • Added DetachedQuery
    • ExecuteUpdate support for native SQL queries
    • From Hibernate:
      • Ported Actions, Events and Listeners
      • Ported StatelessSession
      • Ported CacheMode
      • Ported Statistics
      • Ported QueryPlan
      • Ported ResultSetWrapper
      • Ported Structured/Unstructured cache
      • Ported SchemaUpdate
      • Ported Hibernate.UserTypes
      • Ported Hibernate.Mapping
      • Ported Hibernate.Type
      • Ported EntityKey
      • Ported CollectionKey
      • Ported TypedValue
      • Ported SQLExceptionConverter
      • Ported Session Context
      • Ported CascadingAction
  • Breaking changes:
    • Changed NHibernate.Expression namespace to NHibernate.Criterion
    • Changed NHibernate.Property namespace to NHibernate.Properties
    • No AutoFlush outside a transaction - Database transactions are never optional; all communication with a database has to occur inside a transaction, whether you read or write data.
    • <nhibernate> section is ignored, using <hibernate-configuration> section (note that they have different XML formats)
    • Configuration values are no longer prefixed by "hibernate.", if before you would specify "hibernate.dialect", now you specify just "dialect"
    • IInterceptor changed to match the Hibernate 3.2 Interceptor - interface changed
    • Will perform validation on all named queries at initialization time, and throw if any is not valid.
    • NHibernate will return long for count(*) queries on SQL Server
    • SaveOrUpdateCopy returns a new instance of the entity without changing the original.
    • INamingStrategy interface changed
    • NHibernate.Search - Moved Index/Store attributes to the Attributes namespace
    • Changes to IType, IEntityPersister, IVersionType - of interest only to people who did crazy stuff with NHibernate.
    • <formula> must contain parenthesis when needed
    • IBatcher interface change
  • Fixed bugs:
    • Fixing bug with HQL queries on map with formula.
    • Fixed exception when an <idbag> has a <composite-element> that in turn contains a <many-to-one>
    • Multi criteria doesn't support paging on dialects that don't support limiting the query size using SQL.
    • Fixed an issue with limit string in MsSql2005 dialect sorting incorrectly on machines with multiple processors
    • Fixed an issue with getting NullReferenceException when using SimpleSubqueryExpression within another subexpression
    • Fixed Null Reference Exception when deleting a <list> that has holes in it.
    • Fixed duplicate column name on complex joins of similar tables
    • Fixed MultiQuery force to use parameter in all queries
    • Fixed concat function failing when a parameter contains a comma and MaxResults is used
    • Fixed failure with Formula when using paging on the MSSQL 2005 dialect
    • Fixed PersistentEnumType incorrectly assumes enum types have zero-value defined
    • Fixed SetMaxResults() returns one less row when SetFirstResult() is not used
    • Fixed Bug in GetLimitString for SQL Server 2005 when ordering by aggregates
    • Fixed SessionImpl.EnableFilter returns wrong filter if already enabled
    • Fixed Generated Id does not work for MySQL
    • Fixed one-to-one can never be lazy
    • Fixed FOR UPDATE statements not generated for pessimistic locking
  • Improvements:
    • Added Guid support for the PostgreSQL dialect
    • Added support for comments in queries
    • Added Merge and Persist to ISession
    • Support IFF for SQL Server
    • IdBag now works with identity columns
    • Multi Criteria now uses the Result Transformer
    • Handling key-many-to-one && not-found
    • Can now specify that a class is abstract in the mapping.
  • Guidance:
    • Prefer to use the Restrictions class instead of the Expression class for defining Criteria queries (see the sketch after this list).
  • Child projects:
    • Added NHibernate.Validator
    • Added NHibernate.Shards
    • NHibernate.Search updated to match Hibernate Search 3.0
  • Criteria API:
    • Allow Inspection, Traversal, Cloning and Transformation for ICriteria and DetachedCriteria
      • Introduced CriteriaTransformer class
      • GetCriteriaByPath, GetCriteriaByAlias
    • Added DetachedCriteria.For<T>
    • Added Multi Criteria
    • Projections can now pass parameters to the generated SQL statement.
    • Added support for calling Sql Functions (HQL concept) from projections (Criteria).
    • Added ConstantProjection
    • Added CastProjection
    • Can use IProjection as a parameter to ICriterion
  • Better validation for proxies:
    • Now supports checking for internal fields as well
    • Updated Castle.DynamicProxy2.dll to have better support for .NET 2.0 SP1
  • SQLite:
    • Support for multi query and multi criteria
    • Supporting subselects and limits
    • Allowed DROP TABLE IF EXISTS semantics
  • PostgreSQL (Npgsql):
    • Enable Multi Query support for PostgreSQL
  • FireBird:
    • Much better overall support
  • Batching:
    • Changed logging to make it clearer that all commands are sent to the database in a single batch.
    • AbstractBatcher now uses the Interceptor to allow the user to intercept and change the SQL before it is prepared
  • Error handling:
    • Better error message on exception in getting values in Int32Type
    • Better error message when using a subquery that contains a reference to a non-existent property
    • Throws a more meaningful exception when calling UniqueResult<T>() with a value type and the query returns null.
    • Overall better error handling
    • Better debug logs
  • Refactoring:
    • Major refactoring internally to use generic collections instead of the non generic types.
    • Major refactoring to the configuration and the parsing of hbm files.
  • Factories:
    • Added ProxyFactoryFactory
    • Added BatchingBatcherFactory
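
To make the Criterion namespace change and the Restrictions guidance above concrete, here is a minimal sketch (the Customer entity and the open ISession are assumed for the example; they are not part of the release notes):

using System.Collections;
using NHibernate;
using NHibernate.Criterion;

public static class CustomerQueries
{
	public static IList FindByName(ISession session, string name)
	{
		// NHibernate 1.2 used Expression.Eq from the NHibernate.Expression namespace;
		// in 2.0 the namespace is NHibernate.Criterion and Restrictions is preferred.
		return session.CreateCriteria(typeof(Customer))
			.Add(Restrictions.Eq("Name", name))
			.SetMaxResults(10)
			.List();
	}
}
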
time to read 5 min | 813 words

I am doing some work on the DSL book right now, and I ran into this example, which is simply too delicious not to post about.

Assume that you have the following UI, which you use to let a salesperson generate a quote for your system.

[image: the quote generation UI]

This is much more than just a UI issue, to be clear. You have a fully fledged logic system here. Calculating the total cost is the easy part; first you have to understand what you need.

Let us define a set of rules for the application; it will be clearer when we have the list in front of us:

  • The Salary module requires a machine for every 150 users.
  • The Taxes module requires a machine for every 50 users.
  • The Vacations module requires the Scheduling Work module.
  • The Vacations module requires the External Connections module.
  • The Pension Plans module requires the External Connections module.
  • The Pension Plans module must be on the same machine as the Health Insurance module.
  • The Health Insurance module requires the External Connections module.
  • The Recruiting module requires a connection to the internet, and therefore requires a firewall from the recommended list.
  • The Employee Monitoring module requires the CompMonitor component.

Of course, this fictitious sample is still too simple; we could probably sit down and come up with fifty or so more rules that we need to handle. Just handling the second-level dependencies (External Connections, CompMonitor, etc.) would be a big task, for example.

Assume that you have not a single such system, but 50 of them. I know of a company that spent 10 years and has 100,000 lines of C++ code (that implements a poorly performing Lisp machine, of course) to solve this issue.

My solution?

specification @vacations:
	requires @scheduling_work
	requires @external_connections
	
specification @salary:
	users_per_machine 150
	
specification @taxes:
	users_per_machine 50

specification @pension:
	same_machine_as @health_insurance

Why do we need a DSL for this? Isn’t this a good candidate for a data storage system? It seems to me that we could have expressed the same ideas with XML (or a database, etc.) just as easily. Here is the same concept, now expressed in XML.

<specification name="vacation">
	<requires name="scheduling_work"/>
	<requires name="external_connections"/>
</specification>

<specification name="salary">
	<users_per_machine value="150"/>
</specification>

<specification name="taxes">
	<users_per_machine value="50"/>
</specification>

<specification name="pension">
	<same_machine_as name="health_insurance"/>
</specification>

That is a one-to-one translation between the two, so why do I need a DSL here?

Personally, I think that the DSL syntax is nicer, and the amount of work to get from a DSL to the object model is very small compared to the work required to translate to the same object model from XML.

That is mostly a personal opinion, however. For a purely declarative DSL, we are comparable to XML in almost all respects. It gets interesting when we decide that we don’t want this purity. Let us add a new rule to the mix, shall we?

  • The Pension Plans module must be on the same machine as the Health Insurance module, if the user count is less than 500.
  • The Pension Plans module requires a distributed messaging backend, if the user count is greater than 500.

Trying to express that in XML can be a real pain. In fact, it means that we are trying to shove programming concepts into the XML, which is always a bad idea. We could try to put this logic in the quote generation engine, but that complicates it for no good reason, ties it to the specific application that we are using, and in general makes a mess.

Using our DSL (with no modification needed), we can write it:

specification @pension: 
	if information.UserCount < 500: 
		same_machine_as @health_insurance 
	else: 	
		requires @distributed_messaging_backend

As you can imagine, once you have run all the rules in the DSL, you are left with a very simple problem to solve, with all the parameters well known.
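
To make that concrete, here is a rough sketch of the kind of object model that evaluating these specifications could populate (the names are my own invention for this post, not from the book):

using System.Collections.Generic;

// One of these per module that appears in the quote; after all the
// specifications have run, calculating machine counts and validating
// dependencies is a simple pass over these objects.
public class ModuleSpecification
{
	public string ModuleName;
	public int? UsersPerMachine;                 // e.g. 150 for the Salary module
	public List<string> RequiredModules = new List<string>();
	public List<string> SameMachineAs = new List<string>();
}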

In fact, throughout the process, there isn't a single place of overwhelming complexity.

I like that.

time to read 1 min | 92 words

Just found myself writing that, and it was amusing.

import System.Net
import System.IO

if argv.Length != 2:
	print "You must pass [prefix] [path] as parameters"
	return

prefix = argv[0]
path = argv[1]

if not Directory.Exists(path):
	print "Could not find ${path}"
	return

listener = HttpListener()
listener.Prefixes.Add(prefix)
listener.Start()

while true:
	context = listener.GetContext()
	file = Path.GetFileName(context.Request.RawUrl)
	fullPath = Path.Combine(path, file)
	if File.Exists(fullPath):
		context.Response.AddHeader("Content-Disposition","attachment; filename=${file}")
		bytes = File.ReadAllBytes(fullPath)
		context.Response.OutputStream.Write(bytes, 0, bytes.Length)
		context.Response.OutputStream.Flush()
		context.Response.Close()
	else:
		context.Response.StatusCode = 404
		context.Response.Close()
time to read 1 min | 94 words

Well, after a long time of bugging him about it, I finally decided to give Craig the first Hostile Blogging Award. So Craig has a blog now, which is wonderful.

Who is Craig and why should you care to read what he is thinking about?

  • A friend
  • Committer to both Castle Project and Rhino Tools
  • Main guy behind Binsor 2.0
  • Main guy behind Zero Config WCF
  • All around interesting guy

Subscribed, and very excited.

time to read 2 min | 271 words


Well, I was toying around with the idea for about a month or so, and I finally got around to actually recording & editing it.

Highlights:

  • Vastly improved sound quality. I think you'll enjoy it.
  • Vastly extended in time & scope. For some reason, this screencast is longer than many full length movies. We also write our own bus implementation from scratch, and discuss the implementation details there.
  • This is more of a low level discussion, not a high level architectural discussion about why you want a bus (well, I do talk about it a bit, but mostly we implement the bus).
  • The first 45 minutes are dedicated to moving from old style RPC to an async batching bus approach that still uses request / reply. The rest is dedicated to building the one way, message passing, queue based service bus (a rough sketch of that shape follows after this list).
    • There are some interesting challenges there, and I hope you'll make sense of my grunts as I write the code.
    • The last hour or so of the screencast is live coding, and you get to see how I revert some design decisions as they turn out to be problematic.
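
To give a rough idea of where the screencast ends up, here is a minimal sketch of the shape of a one way, queue based bus (the names are illustrative only; they are not the actual code from the recording):

// A handler processes a single message type.
public interface IMessageHandler<T>
{
	void Handle(T message);
}

// The one way bus: Send does not wait for a reply; a background thread
// pulls messages from a queue and dispatches them to registered handlers.
public interface IServiceBus
{
	void Send(object message);
	void Subscribe<T>(IMessageHandler<T> handler);
}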

The technical details:

  • Total length: An hour and forty minutes(!)
  • Size: 160 MB
  • Code starts at 04:31

Go to download page
