There is a new release of NHibernate available; download it now. It contains a host of great new features, like support for dependency injection for entities using an inversion of control container of your choosing. There is also a new ANTLR-based HQL parser that has enabled some HQL improvements, like the with clause. This new parser is also central to the forthcoming LINQ support and is the result of some great work by Steve Strong and Fabio.
This release also includes the long-sought-after support for executable bulk queries. This is a feature that the Java (Hibernate) version has had for some time and is now fully ported to NHibernate.
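As a sketch of what executable bulk queries look like (the Customer entity and its properties here are made up for illustration, not taken from a real mapping):

```csharp
// Hedged sketch: assumes an open NHibernate ISession and a mapped
// Customer entity (neither is shown here).
// A bulk delete runs as a single SQL statement, without loading entities:
int deleted = session.CreateQuery("delete Customer c where c.IsInactive = true")
                     .ExecuteUpdate();

// Bulk updates work the same way, with named parameters:
session.CreateQuery("update Customer c set c.Region = :region where c.Region is null")
       .SetParameter("region", "Unknown")
       .ExecuteUpdate();
```

The return value of ExecuteUpdate is the number of rows affected, just like in Hibernate.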
I have been doing more and more talks lately. I am not a natural speaker; I usually need to practice a few times beforehand in order to talk fluently. But practice has made me more comfortable, and I feel that I am getting better at it. Two days ago I held a long (3.5 hour) talk on ASP.NET MVC which was both my longest talk and my most successful, at least judging by the positive response I got, which was very encouraging.
The talk was mostly a long code demo. One thing that can often kill a code demo is that it slows down the tempo of a presentation when there is too much typing of unimportant text/code, like creating a new class, constructor, etc., before getting to the really important part of a particular function.
In this MVC code demo I tried to have as much prepared as possible. I started with a standard MVC template project, but had hidden (excluded from the project) controllers and views which I included as the code demo progressed. These controllers/views included some existing functionality which I then expanded upon. That way I did not need to type class definitions defining simple controller actions and views before getting to the interesting bits.
I did the same with many other parts of the presentation. For example, when explaining how to unit test controller actions, I had an almost empty test method already written and only needed to show how to unit test the controller action and assert on the result.
I also had code snippets in the toolbox for some of the tricky parts of the code demo, which I could use if something did not work or if it took too long to write. Never let a problem in a code demo completely halt the presentation; have a backup plan, or just move along if you cannot fix the problem in the first couple of tries.
I feel that I still have much to work on when it comes to presentation technique. I often talk a little too fast and need to focus on keeping calm and talking in a slow and articulate manner. It doesn’t matter how nice your PowerPoint or code demo is if the audience can’t hear what you are saying!
Next up is trying Keynote; it will be nice to see how it compares to PowerPoint.
I listened to the panel discussion on the pros and cons of stored procedures from the currently ongoing TechEd09 today. It was not what I had hoped for: the panel consisted almost exclusively of pro-stored-procedure people, with the exception of Jeffrey Palermo, who for an NHibernate guy appeared very pro stored procedure.
I was hoping for a more balanced debate. The arguments focused too much on the real vs. perceived benefits of stored procedures in terms of performance, database coupling, vendor coupling, security, etc.
The really big issues I personally have with stored procedures (sprocs from now on) were never fully brought up, especially when you compare sprocs plus a manually coded DAL (which I find is the most common combination) with NHibernate.
Code duplication

In my experience, systems that rely heavily on sprocs also show a large amount of code duplication, for example duplication of SQL queries within the same sproc in the case of dynamic queries that filter on different columns based on input. I have seen sprocs that feature the same basic query duplicated 12 times with small variations in the where/order clause. Duplication between different sprocs can also be very high. And the final point is the sad fact that sprocs usually contain some business logic, logic that sometimes also exists in the application itself.
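To make the duplication point concrete, here is a small sketch (invented names, not code from any real system) of the alternative: composing the filter once instead of copy-pasting the same SELECT for every filter combination, the way a dynamic sproc typically does.

```csharp
using System;
using System.Collections.Generic;

public class OrderQueryBuilder
{
    // Builds one parameterized query; each optional filter adds a condition
    // instead of requiring a duplicated copy of the whole statement.
    public static string Build(string customer, DateTime? since)
    {
        var conditions = new List<string>();
        if (customer != null) conditions.Add("o.CustomerId = @customer");
        if (since != null) conditions.Add("o.CreatedAt >= @since");

        var sql = "select o.Id, o.Total from Orders o";
        if (conditions.Count > 0)
            sql += " where " + string.Join(" and ", conditions);
        return sql + " order by o.CreatedAt";
    }
}
```

This is essentially what an ORM does for you behind the scenes when it generates ad hoc SQL.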
Productivity & lines of code

This topic was also not really touched upon. Data access layers that use sprocs often contain massively more code to deal with calling the sprocs and mapping the results to entities. The amount of T-SQL you need to write for the basic CRUD sprocs is also a huge time waster and a potential maintenance nightmare.
It could be argued that some of these issues come down to incompetent programmers/DBAs and that sprocs themselves are not to blame. Maybe it is not fair to compare sprocs with an ORM like NHibernate, but I think you can compare having to write and maintain sprocs with letting NHibernate generate ad hoc SQL. Sure, sprocs still have their uses in specific and relatively rare scenarios, but the panel discussion too often concluded with the wishy-washy "it depends". Of course it depends, context is everything (as Scott Bellware always says), but that does not mean that one method shouldn't be the preferred "best practice" choice.
Sorry for the rant. Kind of frustrated with a current legacy system (which uses sprocs) :)
The picture to the right shows a hand that illustrates how the cross product of two vectors (a and b) generates a vector that is perpendicular to both a and b. Now try to imagine four vectors that are all perpendicular to each other. This is kind of tricky, mainly because it is impossible for four 3-dimensional vectors to all be perpendicular to each other.
But it is not impossible if the vectors are 4-dimensional. However, that creates another problem: it is (at least for me) not possible to mentally picture a 4-dimensional space. Luckily I don’t have to; the math works anyway :) The reason I post this is that I am porting an old 4D Julia Raytracer from C++ to C# and was just struck by this magic function:
/// <summary>
/// Calculates a quaternion that is perpendicular to three other quaternions.
/// Quaternions are handled as vectors in 4D space.
/// </summary>
public static Quaternion Cross4D(Quaternion q1, Quaternion q2, Quaternion q3)
{
    double b1c4 = q2.r * q3.k - q2.k * q3.r;
    double b1c2 = q2.r * q3.i - q2.i * q3.r;
    double b1c3 = q2.r * q3.j - q2.j * q3.r;
    double b2c3 = q2.i * q3.j - q2.j * q3.i;
    double b2c4 = q2.i * q3.k - q2.k * q3.i;
    double b3c4 = q2.j * q3.k - q2.k * q3.j;

    var r = -q1.i * b3c4 + q1.j * b2c4 - q1.k * b2c3;
    var i =  q1.r * b3c4 - q1.j * b1c4 + q1.k * b1c3;
    var j = -q1.r * b2c4 + q1.i * b1c4 - q1.k * b1c2;
    var k =  q1.r * b2c3 - q1.i * b1c3 + q1.j * b1c2;

    return new Quaternion(r, i, j, k);
}
This is a function that calculates a cross product in 4-dimensional space (the Quaternion class is used as a 4-dimensional vector in imaginary space). The naming of the variables in the calculation relates to how the cross product formula is derived (as the determinant of a matrix). Anyway, I just found it funny that no matter how hard I try I cannot picture what this function actually generates. This is probably nothing new for mathematicians or physicists, who I guess daily have to fight the limitations of the human mind.
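For reference, the determinant in question can be written as follows, with a = q1, b = q2, c = q3, the components (r, i, j, k) numbered 1 through 4, and formal basis vectors e1..e4 in the last row:

```latex
\operatorname{cross}(a, b, c) =
\begin{vmatrix}
a_1 & a_2 & a_3 & a_4 \\
b_1 & b_2 & b_3 & b_4 \\
c_1 & c_2 & c_3 & c_4 \\
e_1 & e_2 & e_3 & e_4
\end{vmatrix}
```

Expanding along the bottom row gives exactly the code above: the bXcY variables are the 2x2 minors $b_x c_y - b_y c_x$ of the b and c rows, and the alternating signs of the cofactor expansion account for the minus signs on the r and j components.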
But the math works, I can position the camera in 4D space and render pictures of the 4-dimensional Julia Set :)
On a side note, this app was MUCH easier to parallelize (using Parallel.For from the Parallel Extensions Library) than GenArt. Because the algorithm works like a raytracer, the outer ray casting loop is easily implemented using Parallel.For, which instantly gave a 4x performance increase on my quad-core CPU.
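The parallelization amounts to little more than swapping the outer scanline loop for Parallel.For; a minimal sketch (the Render/traceRay names are my own, not the raytracer's real API):

```csharp
using System;
using System.Threading.Tasks;

public class RayTraceSketch
{
    // Each image row is independent of the others, so the outer loop
    // parallelizes trivially: no locking, no shared mutable state per row.
    public static double[,] Render(int width, int height, Func<int, int, double> traceRay)
    {
        var image = new double[height, width];
        Parallel.For(0, height, y =>
        {
            for (int x = 0; x < width; x++)
                image[y, x] = traceRay(x, y); // cast one ray per pixel
        });
        return image;
    }
}
```

Since every iteration writes to a distinct slice of the image, the loop scales almost linearly with core count, which matches the roughly 4x speedup on a quad core.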
I took some time to upload some of the old animations to YouTube; they were rendered many years ago using the C++ version.
Here is an animation of a camera move around the Julia set; the camera is moving in the second (i) and fourth (k) imaginary dimensions.
Here is another one, which I really like, where the Julia constant is moving in a small circle in the first and second dimensions. It gives me a strange impression of something organic and fluid. When my friend and I presented this rendering technique, the first slide had this animation in a loop :)
You can see an interesting artifact of the rendering algorithm in the video above: the Julia set actually hits the camera plane. The camera has a near plane where we start traversing the rays and a far plane where we stop; what happens is that the middle expands beyond the near plane, creating a flat surface.
Here are some other rendered animations that I uploaded to YouTube:
How do you grade how well an application is written?
There are many factors that will play a role in such an evaluation, for example (in no particular order):
Does the design follow object-oriented principles?
Does it work? (i.e. few bugs)
Does the code have unit tests?
Is the code clean (easy to read, small functions)?
How much code duplication is there?
Are there valuable code comments?
Of all those qualities, I have to say readability is the most important. I don’t care how procedural the code is, as long as the functions are small and the conditional logic is written in such a way that it is easy to follow. Don’t get me wrong, I value object orientation, the S.O.L.I.D principles and unit tests a great deal. But the majority of existing systems I have been tasked to maintain and develop new functions in seldom exhibited any of those qualities.
What you usually find is a mess of procedural code where most of the code is located in the codebehind, with some common code moved to static methods on helper classes.
What constantly surprises me is how competent and intelligent developers can create complex and functioning systems yet fail to grasp the simplest methodologies for writing readable code.
Here is some really bad code which has some of the characteristics that constantly frustrates me:
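An invented example in that spirit (hypothetical names, deliberately bad):

```csharp
using System;

public class OrderHelper
{
    // Deliberately bad, invented code: fully qualified namespaces everywhere,
    // an opaque conditional, and try/catch used as a guard clause.
    public static bool ProcessOrder(System.Collections.Generic.List<System.String> items, System.Int32 status)
    {
        // What does this combination of statuses actually mean? No way to tell.
        if (status != 2 && status != 4 || items.Count > 0 && status == 9)
        {
            try
            {
                // The catch below swallows the real error (an empty list)
                // instead of checking for it up front.
                System.String first = items[0];
                return first.Length > 0;
            }
            catch (Exception)
            {
                return false;
            }
        }
        return false;
    }
}
```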
The code above is not real; I wrote it just to show what I mean. I have a hard time understanding how people can write code like the above. Why include the namespaces in everything? This is something I constantly see and I have never understood the reason for. Then there are the classics, like complicated conditionals that don't convey any information as to what the intent of the condition is. Exception handling is also misunderstood and misused. I often find excessive capturing of exceptions, as if try/catch were used as guard clauses; this is very frustrating because it makes debugging and troubleshooting very painful.
Anyway, this post was not supposed to be a rant on bad code but about my realization that readability is the quality I value most. Sure, you get frustrated with procedural code that could have been so much simplified and reduced if some object orientation principles were applied, but at least you don’t get a headache trying to decipher code that is readable :)
I have spent the last couple of days trying to find ways to parallelize GenArt WPF using Parallel.For (from the Parallel Extensions Library). In the process I stumbled upon a scenario where using Lambdas/anonymous delegates can have pretty substantial performance implications.
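The function in question looked roughly like this (a hedged reconstruction of the pattern with made-up names; the real GenArt code differs):

```csharp
using System;

public class MutationSketch
{
    private static readonly Random Rng = new Random();

    // The lambda captures the local 'mutated' flag (and the parameters),
    // which forces the compiler to allocate a closure object on every call.
    public static double Mutate(double value, double rate, Action onMutation)
    {
        bool mutated = false;
        Action mutate = () =>
        {
            mutated = true;
            onMutation();
        };

        if (Rng.NextDouble() < rate)
            mutate();

        return mutated ? value + Rng.NextDouble() - 0.5 : value;
    }
}
```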
The function above was called in a while loop until a mutation hit was generated. I was not seeing the CPU utilization I was expecting: I expected 100% but got around 80%, which I found strange, since there was nothing I could see that would cause a thread lock. To find the cause I started commenting out code, and it was when I commented out the code above that I immediately saw the CPU utilization jump to 100%. It must have been the garbage collector that caused the decrease in CPU utilization. Why the garbage collector? Well, the code above actually compiles to something very different.
Something like this (reconstructed approximation of the generated IL):
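In C# terms, the compiler rewrites the method along these lines (hypothetical names mimicking the real <>c__DisplayClass convention):

```csharp
using System;

// The compiler hoists the captured locals into a generated class like this;
// a new instance is allocated every time the enclosing method runs.
public class DisplayClass1
{
    public bool mutated;
    public Action onMutation;

    // The lambda body becomes an instance method on the closure class.
    public void LambdaBody()
    {
        mutated = true;
        onMutation();
    }
}

public class MutateRewritten
{
    public static bool Run(bool shouldMutate, Action onMutation)
    {
        var closure = new DisplayClass1();   // heap allocation on every call
        closure.mutated = false;
        closure.onMutation = onMutation;

        Action mutate = closure.LambdaBody;  // delegate bound to the closure

        if (shouldMutate)
            mutate();

        return closure.mutated;              // locals are read back via the closure
    }
}
```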
As you can see, the C# compiler actually creates a separate class to hold the lambda method body, a class that will be instantiated every time the Mutate method is called. The reason for this is that it needs to capture the local variables (this is what makes lambdas/anonymous delegates true closures). I was well aware that this was happening, but I had never encountered a situation where it had any noticeable performance implications, until now that is.
The fact that lambda methods that use local variables will result in an instantiation of a new object should not be a problem 99% of the time, but as this shows it is worth being aware of because in some cases it can matter a great deal.
Developer Summit is a great developer conference held in Stockholm each year. This time it is spread over 3 days, with two conference days on 15-16 April and one workshop day on the 17th. I am going to give a talk about Dependency Inversion (the pattern/principle) and how it can help you create more loosely coupled applications. The talk will also cover what inversion of control containers are good for and how to use them effectively.
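The core of the pattern fits in a few lines; a minimal sketch with invented names (not material from the talk itself):

```csharp
using System;

// The high-level class depends on an abstraction, not a concrete sender.
public interface IMessageSender
{
    void Send(string message);
}

public class ConsoleSender : IMessageSender
{
    public void Send(string message) { Console.WriteLine(message); }
}

public class OrderService
{
    private readonly IMessageSender sender;

    // The dependency is injected; an IoC container can wire this up in
    // production, while tests simply pass in a fake implementation.
    public OrderService(IMessageSender sender)
    {
        this.sender = sender;
    }

    public string PlaceOrder(string item)
    {
        sender.Send("Order placed: " + item);
        return item;
    }
}
```

Because OrderService never names a concrete sender, swapping implementations (or mocking in a unit test) requires no change to the class itself.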
I will also host a workshop about test driven development with ASP.NET MVC, focusing on the testability aspects of ASP.NET MVC. Lab assignments could for example start with an empty controller test. I will go into scenarios where you need to use mocking/stubbing and scenarios where the MVC framework cleverly avoids the need for mocking (by passing FormCollection to the action, for example). There are also going to be lab assignments and examples that show how to use WatiN for integration testing.
Other interesting talks:
Good Test, Better Code by Scott Bellware
A Technical Drilldown into “All Things M” by Brian Loesgen
RESTful Enterprise Integration with Atom and AtomPub by Ian Robinson
There are many more interesting talks, so be sure to sign up if you can.
My name is Torkel Ödegaard. I am a developer, who is very passionate about agile software development, TDD, open source and continuous learning. I live in Stockholm, Sweden.