One of NHibernate's features that I haven't seen mentioned in the documentation or in blogs concerns the two methods on ISession: SaveOrUpdateCopy and Merge.

Let's say we have a backend (i.e. application server) that has an operation called UpdateOrder.


The UpdateOrder message contains a complete order. The normal scenario here is that the backend translates the order contained in the message to a domain model that is then persisted to the database via NHibernate. The problem with an update scenario like this is that the order coming in from the client could have missing order lines. If you use the normal ISession.SaveOrUpdate method, an order line that was removed on the client, and is therefore missing in the UpdateOrder message, will not be deleted from the database.

Why won’t the missing order line be deleted from the db? Well, consider this normal update scenario:


Here the order line is removed on an instance that is already attached to an open NHibernate session. In this case SaveOrUpdate will work perfectly because NHibernate can track the removal of the order line.

Consider this case (which represents the client -> backend scenario I mentioned above):


Here we try to save a detached instance, which means that NHibernate’s dirty tracking and tracking of collection removals will not work.

How do you solve this problem? One approach is to fetch the order from the db and do a manual merge of the changes. The current system I am working on has a data access layer that uses LinqToSql, and there are many, many update scenarios like the one described above. The amount of code needed to manually merge and figure out what has happened with all relations (added/removed order lines, for example) is quite substantial.

For the last month we have been migrating the data access layer, bit by bit, to NHibernate. At first I thought that updating detached objects would be a problem we still needed to solve manually, but then I discovered SaveOrUpdateCopy and Merge. These two methods do exactly what the old DAL did manually: before the update they fetch the persisted object from the db and then merge all changes from the detached instance into the persistent instance automatically, including orphaned child deletions!
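In code, the scenario looks roughly like this (a sketch; the entity names and the GetDetachedOrder helper are made up, and OrderLines is assumed to be mapped with cascade="all-delete-orphan"):

```csharp
Order order = GetDetachedOrder();     // e.g. deserialized from an UpdateOrder message
order.OrderLines[0].Quantity = 42;    // modify one line...
order.OrderLines.RemoveAt(1);         // ...and remove another, while detached

using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // Loads the persistent order, copies the detached state onto it,
    // and deletes the now-orphaned order line.
    Order merged = (Order)session.SaveOrUpdateCopy(order);
    tx.Commit();
}
```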



In the above code I modify one order line and remove another. Both operations are on a detached object. Then, using SaveOrUpdateCopy, we get this:


NHibernate fetches the order (in one statement, by joining in the order lines), then performs the merge, figures out that one order line is updated and one is removed, and issues the correct database calls. Is it just me, or is this great? This will literally save thousands of lines of code!

The ISession.Merge method basically does the same thing (from what I can tell). I am not sure what separates them, except that there is a cascade option named “merge” that you can set on relations to control how cascades should be propagated during merge operations.

Here is the API doc for Merge:

Copy the state of the given object onto the persistent object with the same
identifier. If there is no persistent instance currently associated with
the session, it will be loaded. Return the persistent instance. If the
given instance is unsaved, save a copy of and return it as a newly persistent
instance. The given instance does not become associated with the session.
This operation cascades to associated instances if the association is mapped
with cascade="merge". The semantics of this method are defined by JSR-220.

I think SaveOrUpdateCopy is something that has existed in NHibernate for a long time, while Merge was added in 2.1 (clearly ported from Hibernate). Anyway, I am very glad that NHibernate has this ability, because writing and handling the merge operation manually is very boring code to write!

One useful feature in WPF 4.0 is the ability to databind to dynamic (runtime-generated) properties, using DynamicObject as a base class or by implementing the IDynamicMetaObjectProvider interface. I am currently working on a WPF application, and the ability to bind to runtime-generated properties would have been very useful in a story we implemented two weeks ago.

The story concerned merging two object graphs and then visualizing which properties were changed/conflicted in the UI (for example with a different color).

To avoid adding an “XXX_HasMergeChange” property for every property in the presentation model, we solved this with a value converter and some WPF binding magic that some might call a HACK. The solution was only partial, as it only worked in the Grid and not on everything else.

If we had WPF 4.0 we could have solved this like this:
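Something along these lines (the property name, target type and brush are assumptions):

```xml
<Style TargetType="{x:Type TextBlock}">
    <Style.Triggers>
        <DataTrigger Binding="{Binding Name_HasMergeChange}" Value="True">
            <Setter Property="Background" Value="LightSalmon" />
        </DataTrigger>
    </Style.Triggers>
</Style>
```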


In the above style trigger, the data trigger is binding to a property that doesn’t exist on the presentation model. How does WPF then get the value for this property? By calling the TryGetMember method:
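A minimal sketch of such a dynamic presentation model (the class name and the MergeChanges dictionary are assumptions):

```csharp
using System.Collections.Generic;
using System.Dynamic;

public class MergePresentationModel : DynamicObject
{
    // Hypothetical: filled in by the merge step; maps property name -> changed?
    public Dictionary<string, bool> MergeChanges = new Dictionary<string, bool>();

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        const string suffix = "_HasMergeChange";
        if (binder.Name.EndsWith(suffix))
        {
            string property = binder.Name.Substring(0, binder.Name.Length - suffix.Length);
            bool changed;
            MergeChanges.TryGetValue(property, out changed);
            result = changed;
            return true;
        }
        result = null;
        return false; // fall back to normal member lookup
    }
}
```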


WPF will call TryGetMember, which checks whether the property name ends with “_HasMergeChange”; if it does, it looks up the property in the MergeChanges dictionary.

The above is just a simple proof of concept. If I were to go forward with this I would have to figure out a more generic way to define the style and data trigger, to be able to reuse the xaml style markup for example, but that shouldn’t be a big problem. I also tested property change notifications using the INotifyPropertyChanged interface, and they work for dynamic properties as well.

To learn more about the new features in WPF 4.0 read ScottGu’s recent post.

Among the Linq extension methods that came with .NET 3.5 is one called Except. This method takes two lists (first and second).

The MSDN docs say:

This method returns those elements in first that do not appear in second. It does not also return those elements in second that do not appear in first.

This appears to be a lie. Review the code below and guess the output:
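A reconstruction of the kind of code involved (the User class and the data are assumptions; Equals/GetHashCode compare by Id only):

```csharp
using System.Collections.Generic;
using System.Linq;

public class User
{
    public int Id;
    public string Name;
    public User(int id, string name) { Id = id; Name = name; }

    public override bool Equals(object obj)
    {
        var other = obj as User;
        return other != null && other.Id == Id;
    }
    public override int GetHashCode() { return Id; }
}

var list1 = new List<User>
{
    new User(0, "Adam"), new User(0, "Bea"),
    new User(0, "Carl"), new User(1, "Dan")
};
var list2 = new List<User> { new User(1, "Eve") };

var list3 = list1.Except(list2).ToList();
```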


The User object has overridden the Equals and GetHashCode methods, which the Except method uses to determine equality. Since there is only one object in list2 that is also “equal” to an object in list1, I would expect (granted the MSDN docs are correct) that list3 would contain three users with the id zero.

The actual result? list3 will only contain ONE user object (with id zero). When I debug I see that Equals is called to compare objects in list1 with each other.

Using Reflector I can see why: Except is implemented using the internal class System.Linq.Set:


It starts by adding all items from list2 into the set, then for each item from list1 that can be added to the set it yield returns. This filters out all items from list1 that are equal to an item in list2, BUT it also filters out all items in list1 that are equal to any other item in list1!
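The behavior can be approximated with this sketch (not the actual BCL source):

```csharp
using System.Collections.Generic;

static IEnumerable<TSource> ExceptSketch<TSource>(
    IEnumerable<TSource> first, IEnumerable<TSource> second)
{
    var set = new HashSet<TSource>();
    foreach (TSource item in second)
        set.Add(item);
    foreach (TSource item in first)
        if (set.Add(item))      // false for anything equal to an item already seen,
            yield return item;  // including earlier items from 'first' itself
}
```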

Maybe this is not such a common usage scenario, and I don’t recommend overriding Equals and GetHashCode in this manner. Anyway, I was frustrated by this because I burnt an hour on debugging before I figured out what was to blame.

NHibernate has pretty good support for batching, something that can significantly increase performance when inserting or updating a large number of objects.



In the above example you can see that the order lines are created in one statement. In a recent mail conversation with Patrik Löwendahl, he asked for assistance in getting batching to work. The first thing to check is which id generator you are using: you cannot use the native (sql identity) id generator and expect batching to work for inserts. The reason is that for identity inserts NHibernate issues a "select SCOPE_IDENTITY()" statement after each insert statement to fetch the generated id. If you want to use batching for inserts you need to use the guid or hilo id generator.
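For reference, batching is switched on via the adonet.batch_size property, and the mapping must use a batch-friendly generator; a sketch (the batch size and column names are arbitrary):

```xml
<!-- hibernate.cfg.xml: enable ADO.NET batching -->
<property name="adonet.batch_size">100</property>

<!-- mapping: hilo (or guid) instead of native/identity, so inserts can batch -->
<id name="Id" column="OrderId">
  <generator class="hilo" />
</id>
```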

Another issue I came across was that batching does not work as you would hope for associations. For example, if you want to save a thousand orders and each order has five order lines, this results in six thousand calls with batching disabled and two thousand calls with batching enabled. As you can see in the screenshot above, batching is only done on the order lines, not on everything.

You can optimize further by using the stateless session. Inserting entities using NHibernate’s stateless session ignores associations, but by looping through all orders and calling session.Insert(order), and then doing a nested loop to do the same for all order lines, you can insert all orders and order lines in just two calls to the database.
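The two-loop approach looks roughly like this (a sketch; it assumes batching is enabled and that Order exposes its OrderLines collection):

```csharp
using (IStatelessSession session = sessionFactory.OpenStatelessSession())
using (ITransaction tx = session.BeginTransaction())
{
    // First all orders, then all order lines: two batched round-trips
    foreach (Order order in orders)
        session.Insert(order);

    foreach (Order order in orders)
        foreach (OrderLine line in order.OrderLines)
            session.Insert(line);

    tx.Commit();
}
```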

The problem Patrik had was very weird and confusing. To verify that batching was actually happening he used SQL Profiler, while I used NHProfiler. The weird thing is that they showed very different pictures: NHProfiler shows the order lines as being issued in one command, while SQL Profiler shows them as separate RPC calls.



This result left me very confused. The NHProfiler output clearly indicates that batching is being used, but SQL Profiler shows the same result as when batching is off. However, when batching is enabled the performance is significantly better. What is going on here? After some Googling on SQL Profiler and batching I found this comment on Stack Overflow:

On MS SQL Server, SQL Profiler shows each insert statement seems to be on it's own. After reviewing your comment, I viewed a TCP Dump of the conversation and do see that it is batching multiple commands together. SQL Profiler shows each insert as a "RPC Completed" event which was confusing me. Thanks for your help.

It appears that batching IS being done, just not the way I thought it would be (for example as a Batch Starting command in SQL Profiler). The difference, when batching is turned on, is that all the statements are sent to the database in one go, without waiting for a response after each one. That explains the SQL Profiler result; however, I still find the NHProfiler result puzzling, as it indicates that the order lines are created using a single call to sp_executesql.


Ayende, care to explain? :)

Some links on NHibernate batching:

There is a new release of NHibernate available, download it now. It contains a host of great new features, like support for dependency injection for entities using an inversion of control container of your choosing. There is also a new ANTLR-based HQL parser that has allowed for some HQL improvements, like the with clause. The new ANTLR-based HQL parser is also central to the forthcoming LINQ support and is the result of some great work by Steve Strong and Fabio.

This release also includes the long-sought-after support for executable bulk queries. This is a feature that the Java (Hibernate) version has had for some time and is now fully ported to NHibernate.
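Executable bulk queries are DML-style HQL statements run via ExecuteUpdate; a small sketch (the entity and property names are made up):

```csharp
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // Updates rows directly in the database, without loading any entities
    session.CreateQuery("update Order o set o.Status = :s where o.Created < :date")
           .SetString("s", "Archived")
           .SetDateTime("date", cutoff)
           .ExecuteUpdate();
    tx.Commit();
}
```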

For a complete list of new features: link

I have been doing more and more talks lately. I am not a natural speaker; I usually need to practice a few times beforehand in order to talk more fluently. But practice has made me more comfortable with it and I feel that I am getting better at it. Two days ago I held a long (3.5 hour) talk on ASP.NET MVC, which was both my longest talk and my most successful, at least judging by the positive response I got, which was very encouraging.

The talk was mostly a long code demo. One thing that can often kill a code demo is that it slows down the tempo of a presentation when there is too much typing of unimportant text/code, like creating a new class, constructor, etc., before later getting to the really important part of a particular function.

In this MVC code demo I tried to have as much prepared as possible. I started with a standard MVC template project, but had hidden (excluded from the project) controllers and views which I included as the code demo progressed. These controllers/views included some existing functionality which I then expanded upon. That way I did not need to type class definitions defining simple controller actions and views before getting to the interesting bits.

I did the same with many other parts of the presentation. For example, when explaining how to unit test controller actions I already had an almost empty test method written, and only needed to show how to unit test the controller actions and how to assert on the result.

I also had code snippets in the toolbox for some of the tricky parts of the code demo that I could use if something did not work or I felt that it took too long to write. Never let a problem in the code demo completely halt the presentation; have a backup plan, or just move along if you cannot fix the problem on the first two tries.

I feel that I still have much to work on when it comes to presentation technique. I often talk a little too fast, and need to focus on keeping calm and talking in a slow and articulate manner. It doesn’t matter how nice your powerpoint or code demo is if the audience can’t hear what you are saying!

Next up is trying Keynote; it will be nice to see how it compares to PowerPoint.

I listened to the panel discussion on the pros and cons of stored procedures from the currently ongoing TechEd09 today. It was not what I had hoped for: the panel consisted almost exclusively of pro-stored-procedure people, with the exception of Jeffrey Palermo, who for an NHibernate guy appeared very pro stored procedure.

I was hoping for a more balanced debate. The arguments were too focused on the real vs. perceived benefits of stored procedures in terms of performance, database coupling, vendor coupling, security, etc.

The really big issues I personally have with stored procedures (sprocs from now on) were never fully brought up, especially when you compare sprocs and a manually coded DAL (which I find is the most common combination) with NHibernate.

Code duplication
In my experience, systems which rely heavily on sprocs also show a large amount of code duplication, for example duplication of SQL queries within the same sproc in the case of dynamic queries that filter on different columns based on input. I have seen sprocs that feature the same basic query duplicated 12 times with small variations in the where/order clause. Duplication between different sprocs can also be very high. And the final point is the sad fact that sprocs usually contain some business logic, logic that sometimes also exists in the application itself.

Productivity & Lines of code
This topic was also not really touched upon. Data access layers which use sprocs often require far more code to call the sprocs and map the results to entities. The amount of TSQL you need to write for the basic CRUD sprocs is also a huge time waster and a possible maintenance nightmare.

It could be argued that some of these issues are just down to incompetent programmers/DBAs and that sprocs are not to blame. Maybe it is not fair to compare sprocs with an ORM like NHibernate, but I think you can compare having to write and maintain sprocs with letting NHibernate generate ad hoc SQL. Sure, sprocs still have their uses in specific and relatively rare scenarios, but the panel discussion too often concluded with the wishy-washy "it depends". Of course it depends, context is everything (as Scott Bellware always says), but that does not mean that one method shouldn't be the preferred "best practice" choice.

Sorry for the rant. Kind of frustrated with a current legacy system (which uses sprocs) :)

The picture to the right shows a hand illustrating how the cross product of two vectors (a and b) generates a vector that is perpendicular to both a and b. Now try to imagine four vectors that are all perpendicular to each other. This is kind of tricky, mainly because it is impossible for four 3-dimensional vectors to all be perpendicular to each other.

But it is not impossible if the vectors are 4-dimensional. However, that creates another problem: it is (at least for me) not possible to mentally picture a 4-dimensional space. Luckily I don’t have to; the math works anyway :) The reason I post this is that I am porting an old 4D Julia raytracer from C++ to C# and was just struck by this magic function:

/// <summary>
/// Calculates a quaternion that is perpendicular to three other quaternions.
/// Quaternions are handled as vectors in 4D space.
/// </summary>
public static Quaternion Cross4D(Quaternion q1, Quaternion q2, Quaternion q3)
{
  double b1c4 = q2.r*q3.k - q2.k*q3.r;
  double b1c2 = q2.r*q3.i - q2.i*q3.r;
  double b1c3 = q2.r*q3.j - q2.j*q3.r;
  double b2c3 = q2.i*q3.j - q2.j*q3.i;
  double b2c4 = q2.i*q3.k - q2.k*q3.i;
  double b3c4 = q2.j*q3.k - q2.k*q3.j;

  var r = -q1.i*b3c4 + q1.j*b2c4 - q1.k*b2c3;
  var i =  q1.r*b3c4 - q1.j*b1c4 + q1.k*b1c3;
  var j = -q1.r*b2c4 + q1.i*b1c4 - q1.k*b1c2;
  var k =  q1.r*b2c3 - q1.i*b1c3 + q1.j*b1c2;

  return new Quaternion(r, i, j, k);
}

This is a function that calculates the cross product of three 4-dimensional vectors (the Quaternion class is used as a 4-dimensional vector in imaginary space). The naming of the variables in the calculation relates to how the cross product formula is derived (as the determinant of a matrix). Anyway, I just find it funny that no matter how hard I try, I cannot picture what this function actually generates. This is probably nothing new for mathematicians or physicists, who I guess daily have to fight the limitations of the human mind.

But the math works, I can position the camera in 4D space and render pictures of the 4-dimensional Julia Set :)


On a side note, this app was MUCH easier to parallelize (using Parallel.For from the Parallel Extensions Library) than GenArt. Because the algorithm works like a raytracer, the outer ray casting loop is easily implemented using Parallel.For, which instantly gave a 4x performance increase on my quad core CPU.
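The parallelization can be sketched like this (the helper names are assumptions):

```csharp
using System.Threading.Tasks;

// Each pixel row is independent, so the outer ray casting loop
// parallelizes trivially across cores.
Parallel.For(0, height, y =>
{
    for (int x = 0; x < width; x++)
        pixels[y * width + x] = TraceRay(camera, x, y);
});
```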

I took some time to upload some of the old animations to YouTube; they were rendered many years ago using the C++ version.

Here is an animation of a camera move around the Julia set; the camera is moving in the second (i) and fourth (k) imaginary dimensions.


Here is another one, which I really like: the Julia constant is moving in a small circle in the first and second dimensions. It gives me a strange impression of something organic and fluid. When my friend and I presented this rendering technique, the first slide had this animation in a loop :)

You can see an interesting artifact of the rendering algorithm in the video above: the Julia set actually hits the camera plane. The camera has a near plane where we start traversing the rays and a far plane where we stop; what happens is that the middle expands beyond the near plane, creating a flat surface.

Here are some other rendered animations that I uploaded to YouTube:


How do you grade how well an application is written?

There are many factors that will play a role in such an evaluation, for example (in no particular order):

  • Does the design follow object-oriented principles?
  • Does it work? (i.e. few bugs)
  • Does the code have unit tests?
  • Is the code clean (easy to read, small functions)?
  • How much code duplication is there?
  • Are there valuable code comments?

Of all those qualities, I have to say readability is the most important. I don’t care how procedural the code is, so long as the functions are small and the conditional logic is written in such a way that it is easy to follow. Don’t get me wrong, I value object orientation, the S.O.L.I.D principles and unit tests a great deal, but the majority of existing systems I have been tasked to maintain and develop new functions in seldom exhibited any of those qualities.

What you usually find is a mess of procedural code where most of the code is located in the codebehind, with some common code moved to static methods on helper classes.

What constantly surprises me is how competent and intelligent developers can create complex and functioning systems yet fail to grasp the simplest methodologies of writing readable code.

Here is some really bad code that has some of the characteristics that constantly frustrate me:

System.Web.UI.HtmlControls.HtmlInputFile objFile = ...;

try
{
  if (System.IO.Directory.Exists(strDir) == false ||
      Request["id"] != null && Request["id"] == "1" && isTpActive)
  {
    MyApp.Business.Entities.Order order = ...;
  }
}
catch (Exception ex)
{
  error = true;
}

The code above is not real; I wrote it just to show what I mean. I have a hard time understanding how people can write code like the above. Why include the full namespaces on everything? This is something I constantly see and I have never understood the reason for. Then there are the classics, like complicated conditionals that don't contain any information about what the intent of the condition is. Exception handling is also misunderstood and misused. I often find excessive catching of exceptions; it’s as if try/catch is used like a guard clause, which is very frustrating because it makes debugging and troubleshooting very painful.

Anyway, this post was not supposed to be a rant about bad code but about my realization that readability is the quality I value most. Sure, you get frustrated with procedural code that could have been so much simplified and reduced if some object orientation principles were applied, but at least you don’t get a headache trying to decipher code that is readable :)

I have spent the last couple of days trying to find ways to parallelize GenArt WPF using Parallel.For (from the Parallel Extensions Library). In the process I stumbled upon a scenario where using lambdas/anonymous delegates can have pretty substantial performance implications.

The code in question looked something like this:
internal void Mutate(EvolutionContext context)
{
    context.IfMutation(MutationType.AddPoint, () =>
    {
        // ...mutation logic that uses local state...
    });
}

The function above was called in a while loop until a mutation hit was generated, but I was not seeing the CPU utilization I was expecting: I expected 100% but got around 80%, which I found strange since there was nothing I could see that would cause a thread lock. To find the cause I started commenting out code, and it was when I commented out the code above that I immediately saw the CPU utilization jump to 100%. It must have been the garbage collector causing the decrease in CPU utilization. Why the garbage collector? Well, the code above actually compiles to something very different.

Something like this (reconstructed approximation of the generated IL):

public class c__DisplayClass1
{
    public GeneticPolygon __this;
    public EvolutionContext context;

    public void Mutate__0()
    {
        // ...the lambda body...
    }
}

internal void Mutate(EvolutionContext context)
{
    var lambdaClass = new c__DisplayClass1();
    lambdaClass.__this = this;
    lambdaClass.context = context;

    context.IfMutation(MutationType.AddPoint, lambdaClass.Mutate__0);
}

As you can see, the C# compiler actually creates a separate class to hold the lambda method body, a class that will be instantiated every time the Mutate method is called. The reason for this is that it needs to capture the local variables (this is what makes lambdas/anonymous delegates true closures). I was well aware that this was happening, but I had never encountered a situation where it had any noticeable performance implications, until now that is.

The fact that lambda methods that use local variables result in the instantiation of a new object should not be a problem 99% of the time, but as this shows, it is worth being aware of, because in some cases it can matter a great deal.
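One general trick, sketched below with hypothetical names, is to make the lambda capture nothing (no locals, no this); the compiler can then cache the delegate in a static field instead of newing up a closure object on every call:

```csharp
// Captures no locals and no 'this': the compiler caches this delegate,
// so no allocation happens per Mutate call. Only works if the mutation
// logic can be expressed without instance state.
private static readonly Action addPointMutation = () => MutationHelpers.AddPoint();

internal void Mutate(EvolutionContext context)
{
    context.IfMutation(MutationType.AddPoint, addPointMutation);
}
```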

Developer Summit is a great developer conference held in Stockholm each year. This time it is being held in April, spread over 3 days, with two conference days on 15-16 April and one workshop day on the 17th. I am going to give a talk about Dependency Inversion (the pattern/principle) and how it can help you create more loosely coupled applications. The talk will also cover what Inversion of Control containers are good for and how to use them effectively.

I will also host a workshop about test driven development with ASP.NET MVC, focusing on the testability aspects of ASP.NET MVC. Lab assignments could for example start with an empty controller test. I will go into scenarios where you need to use mocking/stubbing and scenarios where the MVC framework cleverly avoids mocking (by passing FormCollection to the action, for example). There are also going to be lab assignments and examples that show how to use WatiN for integration testing.

Other interesting talks:

  • Good Test, Better Code by Scott Bellware
  • A Technical Drilldown into “All Things M” by Brian Loesgen
  • RESTful Enterprise Integration with Atom and AtomPub by Ian Robinson

There are many more interesting talks, so be sure the sign up if you can.

I have been playing with the WPF framework Caliburn, and just to have something fun to work on I ported Roger Alsing's "EvoLisa" to WPF (from WinForms).


I have previously ported this app to Direct3D and to Silverlight; those ports did not work out as I had hoped (although the native C++ Direct3D port was pretty fast). The application UI architecture was inspired by NHibernate Profiler (I took a sneak peek at the code via Reflector, I hope Ayende doesn’t mind). It was the fact that NHibernate Profiler uses Caliburn that got me interested in Caliburn in the first place.

To check out the code (Subversion): (this is just an experimental spike to learn wpf/caliburn, so no unit tests)

I recently read CODE: The Hidden Language of Computer Hardware and Software by Charles Petzold. It was a great read and a book that I can recommend to anyone who wishes to understand how computers really work at the most basic level. The book goes into great detail on how binary systems work, and how computers use binary numbers to encode things like positive and negative numbers, alphabet characters, fractions, etc. But the main part of the book is about how to solve logical problems using simple relays (i.e. transistors) connected in different ways.

Over the chapters, Petzold builds a more and more complex logical machine that ultimately resembles how a real modern computer works. He starts out by building simple logic gates (AND, NAND, OR, etc.) out of relays; he then combines these into more complex units, for example a 1-bit adder and a 1-bit latch.

Most of the stuff in the book was not really news to me, but it was interesting nonetheless. I had forgotten how computers actually perform subtraction by using addition, for example. It is a pretty neat trick: convert the number you are subtracting into its two's complement. Here is an example showing how you can calculate 7 - 5 using only NOT and ADD operators.

An easier way to understand how this works is to try it with the number range 0-59 (like the minutes on a clock). To subtract 5, first take its complement: 59 - 5 = 54. Then add 1 to get 55, and add 7 to get 62; dropping the overflowing 60 leaves 2, which is exactly 7 - 5.

Here we actually had to use subtraction to calculate the complement; the nice thing about two's complement is that it can be calculated using the binary NOT operator (followed by adding one).
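In binary the same trick looks like this (a sketch using 8-bit bytes):

```csharp
byte a = 7, b = 5;
byte complement = (byte)~b;               // NOT 5 = 250 (ones' complement)
byte result = (byte)(a + complement + 1); // 7 + 250 + 1 = 258; the overflow
                                          // beyond 255 is discarded, leaving 2
Console.WriteLine(result);                // prints 2, i.e. 7 - 5
```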

Ok, back to the book. The last chapters cover how computers are programmed. He describes in great detail how, for example, the stack works, how you call and pass parameters to subroutines, and how interrupts are used to respond to hardware events like a key being pressed on a keyboard. Again, nothing new to someone who has worked with assembly language or taken some basic classes in computer science, but it was nice to refresh the knowledge and fill in some gaps.

I first bought this book to give to my father, who has trouble understanding and working with computers. He is constantly frustrated by the simplest things, so I thought that it would help to have some understanding of how computers work. However, I am not going to give this book to my father; after the first couple of chapters the book quickly becomes very technical and tedious for someone who isn't that interested. But for any programmer who doesn't already know the fundamentals of computer hardware and software, or just wants to refresh that knowledge, it is a great read.

In Atwood's latest post, The Ferengi Programmer, he argues that OOP design guidelines, and specifically Robert C. Martin's S.O.L.I.D principles, are rules that hinder critical thinking and can be dangerous. It is a complete straw man argument; no one advocates that these principles should be viewed as absolute rules to be followed blindly without critical thinking.

The most interesting and scary thing about Jeff's post is the comments, like:

"I'll tell you one thing, the Gang of Four book is probably one of my most disappointing programming reads of all time. Completely useless to me. Strange that I can have a successful programming career without understanding that book..."
Anyway, in Rob Conery's response post there was this great comment by David Nelson, in which he makes an analogy with the rules of chess:

"In chess there are a set of rules that are taught to every beginning player: a queen is worth three minor pieces, develop knights before bishops, always castle, etc. But as a player improves, he learns that these are not actually rules, they are generalities. Over the course of analyzing many hundreds of thousands of games, good players have discovered certain strategies that are more likely to lead to a winning position. But just because they are more likely to be better, doesn't mean they will always be better.

As a young player, I would often see an opportunity that I thought would lead to a quickly won game, by trying something other than what the "rules' would indicate. More often than not, I discovered that I was falling into a trap. Had I only followed the rules, I would have been better off. A player has to get very good before he can reliably understand when the rules don't apply.

The point is that just because I know that the rules don't always apply, doesn't mean that I should ignore them and go my own way. I have to factor in both my own experience, and the "rules", which are derived from the experience of thousands of players before me who were better than I am. And I have to weigh each of those factors appropriately. The better I get, the higher I can value my own experience. But even grandmasters work from a standard opening book.

I know of no other industry in the world where the craftsman are so strongly resistant to learning from the mistakes and lessons of those who have come before. I think that it is mostly the result of the technological boom in the last two decades; we haven't had time to develop the educational process to teach programmers what they need to know, but we need the warm bodies, so we will take anybody who will sign up. Even those who are ignorant and unwilling to actually learn what they're doing."

It is a great analogy, not that I know much about chess. I did read a book about chess strategy many years ago but can't say I remember much from it. What I like about the analogy is the way it pictures guidelines and principles as a way to turn novice chess players into masters, and how experience will eventually let you know in which scenarios the principles don’t apply. I also like the line “But even grandmasters work from a standard opening book” :)

For more comments, read Justin Etheredge's response.

I have been working on a WPF app in my spare time. I decided to use the WPF application framework called Caliburn, a lightweight framework that aids WPF and Silverlight development considerably.

Caliburn Goals:

  • Support building WPF/SL applications that are TDD friendly.
  • Implement functionality for simplifying various UI design patterns in WPF/SL. These patterns include MVC, MVP, Presentation Model (MVVM), Commands, etc.
  • Ease the use of a dependency injection container with WPF/SL.
  • Simplify or provide alternatives to common WPF/SL related tasks.
  • Provide solutions to common UI architecture problems.

How does Caliburn work? A big part of WPF is its strong data binding functionality; however, WPF control event handlers are normally defined in the control or view codebehind. Caliburn lets you route control events, using a declarative syntax, to normal methods on your data-bound presentation model.


<UserControl x:Class="GenArt.Client.Views.TargetImageView"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             Action.Target="{Binding}">
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>
        <Border Style="{StaticResource TargetImageBorder}" Grid.Row="0">
            <Image x:Name="TargetImage" Source="..\Resources\Images\ml.bmp" MinHeight="150" MinWidth="150" />
        </Border>
        <Grid Grid.Row="1">
            <StackPanel HorizontalAlignment="Center" VerticalAlignment="Center">
                <Button Height="Auto" Message.Attach="[Event Click] = [Action BrowseForTargetImage] : TargetImage.Source">Select Target Image</Button>
                <Button Height="Auto" Message.Attach="[Event Click] = [Action StartPainting]">Start Painting</Button>
            </StackPanel>
        </Grid>
    </Grid>
</UserControl>
The first interesting Caliburn part is the attribute Action.Target="{Binding}" set on the top UserControl. This tells Caliburn that the action target is the current data binding instance (that is, the presentation model). The second is the attribute Message.Attach="[Event Click] = [Action StartPainting]" set on the last Button control. These two attributes are Caliburn WPF extensions that declaratively attach the button click event to the StartPainting method.

The StartPainting method is defined on the class named ApplicationModel (this is the top, root data bound class for the entire WPF app).

public class ApplicationModel : PropertyChangedBase, IApplicationModel
{
    private DrawingStatsModel stats;
    private PaintingCanvasModel paintingCanvas;

    public ApplicationModel(GenArtDispatcher dispatcher) : base(dispatcher)
    {
        stats = new DrawingStatsModel(this, dispatcher);
        paintingCanvas = new PaintingCanvasModel(this, dispatcher);
    }

    public ImageSource BrowseForTargetImage()
    {
        // Opens a file dialog and returns the chosen image (body omitted).
    }

    [AsyncAction(BlockInteraction = true)]
    public void StartPainting()
    {
        // Starts the painting process (body omitted).
    }
}

As you can see, Caliburn can route WPF control events to normal methods; methods can take arguments from other WPF controls, and methods can return values that Caliburn uses to update control properties (as with BrowseForTargetImage, which returns an ImageSource). This is very powerful, as it allows an almost MVC-like separation between the UI and the underlying presentation behavior.


Caliburn also makes async actions dead simple: if you need a WPF event handled on a background thread (so that it doesn't lock the UI), you only need to add an AsyncAction attribute. When the BlockInteraction parameter is set to true, Caliburn will disable the WPF control that initiated the event and re-enable it when the action completes.

Almost all logic in a WPF app should be handled on background threads; however, all UI interaction needs to be done on the main UI thread. This can be handled easily by using a Dispatcher and a base class, PropertyChangedBase.
public abstract class PropertyChangedBase : INotifyPropertyChanged
{
    protected GenArtDispatcher dispatcher;

    public PropertyChangedBase(GenArtDispatcher dispatcher)
    {
        this.dispatcher = dispatcher;
    }

    #region INotifyPropertyChanged Members

    public event PropertyChangedEventHandler PropertyChanged;

    protected void RaisePropertyChanged(string propertyName)
    {
        var changeEvent = new ChangeEvent();
        changeEvent.PropertyName = propertyName;
        changeEvent.Source = this;
        // The change event is handed to the dispatcher, which raises
        // PropertyChanged on the UI thread (dispatch call elided in this excerpt).
    }

    public void RaisePropertyChangedEventImmediately(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }

    #endregion
}
This class is very important if you want your data bound WPF presentation model to automatically update the UI just by raising a PropertyChanged event; the WPF infrastructure subscribes to this event for all data bound classes. The dispatcher is used so that the event is always raised on the UI thread, which means you can set presentation model properties without having to think about which thread you are on. Example:
private void UpdateStats()
{
    Fitness = Math.Max(0, MaxFitness - model.EvolutionProcess.CurrentFitness);
    Generations = model.EvolutionProcess.Generations;
    SelectedGenerations = model.EvolutionProcess.SelectedGenerations;
}

public double Fitness
{
    get { return fitness; }
    set
    {
        fitness = value;
        RaisePropertyChanged("Fitness");
    }
}
I was very impressed with Caliburn and how it makes WPF development easier. It allows you to move some of the code you would normally write in a code behind class or in a presenter directly into the presentation model, while at the same time making that code easier to unit test. There are still scenarios that require presenters, but I think the majority of UI interactions could be handled using Caliburn in this way. There is more to Caliburn than I have mentioned in this post, so be sure to check it out yourself.

Next Saturday there will be another ALT.NET unconference in Stockholm. As before this is an open conference by developers for developers. This time we will open with some lightning talks.

There are 8 lightning talks currently booked:

  1. Develop for iPhone, perspectives from a .NET developer – Christian Libardo
  2. Fight code rot – Petter Wigle
  3. Should we stop mocking – Emil Gustafsson
  4. OpenTK – Olof Bjarnason
  5. Context/Specification with MSpec – Joakim Sundén
  6. Object databases for .NET – Peter Hultgren
  7. Continuous Integration, a case study – Helen Toomik

The last ALT.NET conference was a great success so if you have a chance to attend be sure to sign up.

I have been playing with Silverlight the last few evenings, trying to port Roger Alsing's "EvoLisa" application to Silverlight. This application is very cool: it uses a genetic algorithm to create an image composed of polygons that resembles a target image (that you can choose). I have been thinking of doing something like that for some time. I am very interested in artificial life and evolution simulations, especially after reading so many books on evolution by Richard Dawkins and others.

The EvoLisa application is written in WinForms and uses the GDI Graphics object to draw polygons to a bitmap surface; it then does pixel level comparisons to check how close the generated image is to the target image. This proved to be very difficult to port to Silverlight, as it is not possible to access the pixel buffer of Silverlight WPF controls and surfaces. After hours of googling for 2D or 3D graphics libraries for Silverlight I gave up and implemented a standard scan line polygon fill algorithm. It is not the simplest of algorithms, especially if you want to support complex polygons.
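For reference, the core idea of a scan line fill is: for each horizontal scan line, compute where the polygon edges cross it, sort the crossings by x, and fill pixels between alternating pairs of crossings (the even-odd rule). A minimal sketch in C# of that idea, with simplified types that are illustrative rather than the actual code from the port:

```csharp
using System;
using System.Collections.Generic;

public static class ScanLineFill
{
    // Fills 'polygon' into a height x width boolean mask using the even-odd rule.
    public static bool[,] Fill((double X, double Y)[] polygon, int width, int height)
    {
        var mask = new bool[height, width];
        for (int y = 0; y < height; y++)
        {
            double scanY = y + 0.5;            // sample at the pixel centre
            var xs = new List<double>();
            for (int i = 0; i < polygon.Length; i++)
            {
                var a = polygon[i];
                var b = polygon[(i + 1) % polygon.Length];
                // Does this edge cross the scan line? The half-open test
                // (<= on one end, > on the other) avoids double-counting vertices.
                if ((a.Y <= scanY && b.Y > scanY) || (b.Y <= scanY && a.Y > scanY))
                {
                    double t = (scanY - a.Y) / (b.Y - a.Y);
                    xs.Add(a.X + t * (b.X - a.X));
                }
            }
            xs.Sort();
            // Fill between successive pairs of crossings.
            for (int i = 0; i + 1 < xs.Count; i += 2)
            {
                int x0 = Math.Max(0, (int)Math.Ceiling(xs[i] - 0.5));
                int x1 = Math.Min(width - 1, (int)Math.Floor(xs[i + 1] - 0.5));
                for (int x = x0; x <= x1; x++)
                    mask[y, x] = true;
            }
        }
        return mask;
    }
}
```

Sampling at pixel centres (the y + 0.5) sidesteps most of the vertex-on-scan-line special cases; a production renderer would also keep a sorted edge table per scan line instead of testing every edge, which is where most of the real complexity (and the complex-polygon handling) lives.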

Another issue was displaying the generated pixel buffer, which is also not directly possible. The only way to currently do it is through the Image control: you encode your pixel buffer as a binary PNG stream, which the Image control can then display. Luckily Joe Stegman had already figured this out, so I did not need to write my own PNG encoder.

The last issue was also surprising: there is currently no way to read an image client side and access its pixels. This is very easy to do server side with the Bitmap class, so instead of writing my own jpg/png decoder I send the target image to the server, do the decoding there using the Bitmap class, and then return the pixels as an array of bytes. This is only required once, so there is no significant performance penalty. Hopefully Silverlight 3 will add more low level graphics APIs.
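The server side step amounts to loading the uploaded bytes into a Bitmap and flattening its pixels into a byte array. A rough sketch of what such a service method could look like; the method name, class name, and RGB byte layout are my assumptions, not the actual service contract:

```csharp
using System.Drawing;   // server side only; System.Drawing is not available in Silverlight
using System.IO;

public static class ImageDecodingService
{
    // Hypothetical service operation: decodes any format Bitmap supports
    // (jpg, png, ...) and returns raw RGB bytes, row by row.
    public static byte[] DecodeToRgbBytes(byte[] imageFile, out int width, out int height)
    {
        using (var stream = new MemoryStream(imageFile))
        using (var bitmap = new Bitmap(stream))
        {
            width = bitmap.Width;
            height = bitmap.Height;
            var pixels = new byte[width * height * 3];
            int i = 0;
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++)
                {
                    // GetPixel is slow but simple; Bitmap.LockBits would be
                    // the faster choice, and fine here since this runs once.
                    Color c = bitmap.GetPixel(x, y);
                    pixels[i++] = c.R;
                    pixels[i++] = c.G;
                    pixels[i++] = c.B;
                }
            return pixels;
        }
    }
}
```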

You can see a screenshot to the right; this is just after two evenings of hacking, so the GUI is still very rough. The performance is not completely on par with the WinForms version, probably because the scan line polygon renderer is not as optimized as the GDI version (I think GDI is not hardware accelerated, right?).

Last night I took a break from the Silverlight version because I just had to try doing the same thing using WPF and Direct3D to render the polygons. I got something working, but I don't know the performance benefit yet as it is currently only rendering at the monitor refresh rate. You clearly notice how the Silverlight version slows down as more polygons are added; this is something I hope the Direct3D version will eliminate.