image

I thought I would do a best-of post, as many others seem to be doing.

Most viewed posts (in order):

  1. IoC Container Benchmark - Unity, Windsor, StructureMap and Spring.NET
  2. NHibernate 2.0 Events and Listeners
  3. JQuery and Seperation of Concerns
  4. Cleanup your html with JQuery
  5. Breadcrumb menu using JQuery and ASP.NET MVC
  6. Url Routing Fluent Interface
  7. Creating a WatiN DSL using MGrammar
  8. View Model Inheritance
  9. NHibernate 2.0 Statistics and a MonoRail filter

This was actually my first year of blogging. I had had ambitions to start blogging for years but never got around to actually doing it, mainly for two reasons. First, I thought that if I was going to start blogging I would need to set up a custom domain, arrange hosting, install and configure some blogging software, etc. All that initial setup was a roadblock; I just could not find the time because of a heavy workload and a lot of overtime. The second reason was that if I started blogging I wanted it to be a serious blog with valuable content, and for that I felt I needed to mature and gain some more experience.

In retrospect I regret that I did not start sooner. During 2005-2007 I was a lead developer on a multi-tenant B2B system where I did some interesting work with URL rewriting in WebForms (before there was much information about it), started using NHibernate and Castle Windsor, implemented SAP integration logging using AOP, etc. In short, I had a lot to blog about that could have been valuable and helpful to the .NET community.

I am very glad that I eventually started blogging because it has been a fun, rewarding and educational experience. One of the reasons I eventually started was the extremely easy setup that Google's Blogger service provided, where you could register a domain and start blogging in a matter of minutes. Blogger has been pretty great as a way to get started; the only problems are the bad comment system and some lack of flexibility. I will probably move to a hosted solution where I can run and configure the blogging software myself during 2009, but right now it is not a top priority.

I want to thank everyone who subscribes to or reads this blog. I have been very pleasantly surprised by the number of people who subscribe; it is very motivating to see that people find value in the things I write, and it keeps me wanting to write more and better.

Merry Christmas & Happy New Year

If you have worked with DTS or SSIS packages you probably know that they can quickly become painful to work with, especially concerning versioning, logging and above all deployment.

I recently tried Ayende's open source ETL framework RhinoETL, which tackles the ETL problem from a completely different angle than DTS / SSIS. At its heart RhinoETL is a very simple .NET framework for handling an ETL process; the key components are processes, pipelines and operations.

The process that I needed was a very simple one, namely to update a text table stored in multiple databases. The updates could be of different types, for example swapping every occurrence of a text translation for another, or deleting or updating a specific row. In previous releases this was handled by writing manual update scripts. The release I am currently working on, however, requires extensive changes to the texts in these tables spread over many databases, and writing repetitive SQL scripts was not something I felt like doing. It felt like a good opportunity to try RhinoETL.

I began writing this input operation:

public class ReadWordList : InputCommandOperation
{
    public ReadWordList(string connectionStringName) 
      : base(connectionStringName) {  }

    protected override Row CreateRowFromReader(IDataReader reader)
    {
        return Row.FromReader(reader);                
    }

    protected override void PrepareCommand(IDbCommand cmd)
    {
        cmd.CommandText = "SELECT * FROM Wordlists";
    }
}

This is the first operation; its responsibility is to fill the pipeline with rows from the Wordlists table. The next operation is the one updating the rows. It takes as input a list of ITextChange instances, the objects that perform the changes.

public class TextChangeOperation : AbstractOperation
{
    private IList<ITextChange> changes;
  
    public TextChangeOperation(IList<ITextChange> changes)
    {
        this.changes = changes;
    }

    public override IEnumerable<Row> Execute(IEnumerable<Row> rows)
    {
        foreach (var row in rows)
        {
            foreach (var change in changes)
            {
                if (change.IsValidFor(row))
                    change.Perform(row);
            }

            yield return row;
        }
    }
}

There is a class hierarchy representing the different types of text changes:

 

image
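The class diagram itself is not reproduced here, but the interface that the hierarchy implements can be inferred from the code that follows; it boils down to something like this (a reconstruction, not the exact CodeSaga source):

public interface ITextChange
{
    // Does this change apply to the given row?
    bool IsValidFor(Row row);

    // Apply the change to the row.
    void Perform(Row row);
}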

The code for the GeneralTextSwap class is very simple:

public class GeneralTextSwap : ITextChange
{
    public string TextOld { get; set; }
    public string TextNew { get; set; }

    public bool IsValidFor(Row row)
    {
        var text = ((string) row["Text"]).Trim();
        return text == TextOld;
    }

    public void Perform(Row row)
    {
        row["TextOld"] = row["Text"];
        row["Text"] = TextNew;
    }
}

The point of storing the old text value is that I have an operation after the TextChangeOperation that logs all changes to a csv file. The changes for a specific release are then just defined as a static list on a static class. For example:

public class ReleaseChanges_For_Jan09
{
  public static IList<ITextChange> List;

  static ReleaseChanges_For_Jan09()
  {
    List = new List<ITextChange>()
    {            
      new GeneralTextSwap() 
      {
         TextOld = "some old link value",
         TextNew = "some new link value"
      },
      new UpdateTextRow()
      {
          DbName = "DN_NAME",
          WorldList = "USER_PAGE",
          Name = "CROSS_APP_LINK",
          TextNew = "New link value"
      },
      new DeleteTextRow()
      {
          DbName = "DN_NAME_2",
          WorldList = "SOME_PAGE",
          Name = "SOME_LINK"
      }
    };
  }
}

The above is just an example; in reality I have hundreds of general and specific changes, and yes, this is a legacy system which handles text and links very strangely. The above could easily be handled by a simple SQL script with about the same number of lines of T-SQL, but the benefit of placing it inside an ETL process written in C# is that I can easily reuse the change logic over multiple databases, and I also get great logging and tracing of all changes. The point of this post is to show how easy it can be to use OOP to model and abstract an ETL process using RhinoETL. Even though this ETL process is very simple, it serves as a good example of how a T-SQL script, DTS or SSIS package can be rewritten and simplified using the power of an object oriented language.
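For completeness, the operations above get registered in a Rhino ETL process class; a minimal sketch could look roughly like this (the connection string name and the csv logging / write-back operations are placeholders for parts not shown in this post):

public class TextChangeProcess : EtlProcess
{
    protected override void Initialize()
    {
        // Fill the pipeline with rows from the Wordlists table.
        Register(new ReadWordList("TextDb"));

        // Apply the release specific changes to each row.
        Register(new TextChangeOperation(ReleaseChanges_For_Jan09.List));

        // Placeholders: log every change to csv and write the rows back.
        Register(new LogChangesToCsv("changes.csv"));
        Register(new WriteWordList("TextDb"));
    }
}

// Usage: new TextChangeProcess().Execute();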

Fredrik Normén recently posted about how crosscutting concerns are implemented with boring and often duplicated code, which can be handled by using aspect oriented programming (AOP). I have used AOP with great success in previous projects to handle scenarios like logging, transactions and change tracking.

The problem with AOP is that it is not well supported or integrated into the CLR or the .NET framework and toolset. Sure, there are great AOP frameworks that solve the problem via dynamic proxies, but such solutions have some big restrictions: for example, they only work with virtual functions and you need to instantiate the object via a proxy creator. PostSharp handles AOP differently; it is a framework and a .NET post-compiler that injects your cross-cutting concerns at compile time. The problem I have found with PostSharp is that the post-compile step is kind of slow, which is an issue if you do TDD (because you are constantly recompiling). Besides the compile time issue, PostSharp is a great tool and framework with great flexibility and power.
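To illustrate the proxy-based approach and its restrictions, a logging aspect with Castle DynamicProxy might look roughly like this (a sketch with made-up service and method names):

using System;
using Castle.DynamicProxy;

public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Calling " + invocation.Method.Name);
        invocation.Proceed();   // invoke the actual method
        Console.WriteLine("Finished " + invocation.Method.Name);
    }
}

public class OrderService
{
    // Must be virtual, otherwise the proxy cannot intercept the call.
    public virtual void PlaceOrder(int orderId) { /* ... */ }
}

public static class AopExample
{
    public static void Run()
    {
        // The object must be created through the proxy generator, not with "new".
        var generator = new ProxyGenerator();
        var service = generator.CreateClassProxy<OrderService>(new LoggingInterceptor());
        service.PlaceOrder(42);
    }
}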

I was kind of disappointed that Microsoft didn't present any plan to make AOP scenarios easier and more natural on the .NET platform at this year's PDC. I am not sure how, but I just feel that the AOP experience on .NET could be vastly improved :)

A couple of weeks ago I was invited to a presentation by Scott Ambler on "Scaling Agile Software Development: Strategies for Applying Agile in Complex Situations"; it was an interesting talk. He showed a lot of diagrams and stats from a 2008 agile adoption rate survey, you can find the info here. The survey results cover, for example, agile adoption rates, success rates, and increases / decreases in code quality and costs. The problem with the survey is that it does not include any definition of what qualifies as agile. There are many different definitions of what makes software development agile, and many, like the manifesto (if you can even call that a definition), are very vague. I am not saying there is anything wrong with the manifesto or that agile needs to be better defined. What I am saying is that because agile can mean a lot of things to different people, and many think they are doing agile while others might wholeheartedly disagree, surveys like this become very hard to interpret.

For example, the survey that covers modelling and documentation practices concludes that:

  • Agile teams are more likely to model than traditional teams.
  • Traditional teams are equally as likely to create deliverable documentation.
  • For all the talk in the agile community about acceptance test driven development, few teams are actually doing it in practice.
  • etc..

Without some minimum criteria for what qualifies as "doing agile", these surveys don't tell me that much. Even if I disagreed with the definition, such criteria would make surveys like this a lot more interesting.

I am not saying that the surveys are worthless; they do tell you something and they are a great resource for convincing management of agile practices, so it is great that someone is taking the time to make them.

I am very interested in how natural languages work and evolve (see my review of the book The Language Instinct) and ever since I began playing with MGrammar I have wanted to see if it is possible to define English sentence structure using it.

Let's begin with a simple sentence:

The boy likes the girl.

This sentence is composed of the noun phrase (NP) "the boy" and the verb phrase (VP) "likes the girl". So let's begin with this MGrammar syntax:

syntax Main = S*;        
syntax S = NP VP ".";

If we look at the noun phrase "the boy", it is composed of a determiner followed by a noun; likewise, the verb phrase "likes the girl" is composed of a verb followed by a noun phrase. The MGrammar should then be:

syntax NP = Det? N;
syntax VP = V NP;

Then we just need to add some determiners, verbs and nouns:

syntax Det = "a" | "the" | "one";
syntax N = "boy" | "girl" | "dog" | "school" | "hair";
syntax V = "likes" | "bites" | "eats" | "discuss";

If you add an interleave rule to skip whitespace, the sentence should be parsed correctly. That was a really simple sentence, so let's add an adjective.

The nerdy boy likes the girl.

We need to modify the noun phrase rule. Before the noun, an optional number of adjectives (A*) can be placed. This is a simple change: just add A* to the noun phrase rule and add some adjectives.

syntax NP = Det? A* N;
syntax A = "happy" | "lucky" | "tall" | "red" | "nerdy";

That was simple, so let's add something more to the sentence, for example:

The nerdy boy likes the girl from school with red hair.

I added a nested prepositional phrase (PP). A prepositional phrase is, according to Wikipedia, composed of a preposition (P) and a noun phrase.

syntax NP = Det? A* N PP?;
syntax PP = P NP;
syntax P = "on" | "in" | "from" | "with";
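For reference, the rules so far can be assembled into a complete module, wrapped the same way as the WatiN grammars later in this post (the module and language names are my own, and the interleave rule is the whitespace-skipping rule mentioned earlier):

module CodingInstinct {
    import Language;
    import Microsoft.Languages;
    export English;

    language English {
        syntax Main = S*;
        syntax S = NP VP ".";
        syntax NP = Det? A* N PP?;
        syntax VP = V NP;
        syntax PP = P NP;

        syntax Det = "a" | "the" | "one";
        syntax A = "happy" | "lucky" | "tall" | "red" | "nerdy";
        syntax N = "boy" | "girl" | "dog" | "school" | "hair";
        syntax V = "likes" | "bites" | "eats" | "discuss";
        syntax P = "on" | "in" | "from" | "with";

        interleave Skippable = Base.Whitespace+;
    }
}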

The recursive nature of the PP rule makes it possible to nest an infinite number of prepositional phrases inside each other. Here is an illustration of the syntax tree for "girl from school with red hair":

image

I think I will stop here because this post is turning into an English grammar lesson and I don't want to lose all my subscribers :) Defining English sentence structure in MGrammar is pretty pointless, unless you are building a grammar checker, in which case you are still out of luck, as it will probably be impossible to define a grammar for how words are built and you will run into trouble with ambiguity (which most natural languages have). But it was a fun try, and it is a good example for showing how recursive rules are parsed.

If you missed Martin Fowlers post on Oslo, it is a good read, I like how he defines it as a Language Workbench.

PS. I have started twittering. I know I am late to the game, I just didn't get the point of Twitter. I have been using it for two days now and I am beginning to see the light. Oh, and please skip pointing out the irony of the inevitable grammatical errors in this post :)

My main area of interest in programming during my school and university years was computer graphics. I did a lot of hobby programming with games, like the Quake 3 mod Rocket Arena 3, and especially with fractal graphics. This had a quite serious outcome during a course on research methodologies, where I, together with a friend, wrote a scientific paper on 4D Julia rendering.

The 2D Julia fractal to the left is a shape that everyone recognizes. The simple formula that defines the Julia and Mandelbrot set is also something that many know:

z_{n+1} = z_n^2 + c

The variable z and the constant c are, in a normal 2D Julia set, complex numbers. A complex number is a tuple of two values, a real and an imaginary part, which can be visualized in the complex plane, a 2D geometric representation of complex numbers.

image

Complex numbers can be extended to a four-tuple value with one real and three imaginary units. These are called quaternions and can be visualized as a four dimensional space. When using the Julia equation with quaternions we can define a 4D object. It becomes a lot trickier to visualize, however. What is needed is a 4D camera from which you can cast rays (vectors). The algorithm traverses these vectors step by step (from the camera), and for each point the Julia equation is run recursively to determine whether the point is part of the Julia set or not.
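To make the membership test concrete, here is a rough C# sketch of the inner loop (our raytracer was written in C++; the type and the escape threshold here are illustrative):

public struct Quat
{
    public double W, X, Y, Z;

    public Quat(double w, double x, double y, double z)
    {
        W = w; X = x; Y = y; Z = z;
    }

    // Squaring a quaternion q = (w, v): q^2 = (w^2 - |v|^2, 2w*v)
    public Quat Square()
    {
        return new Quat(W * W - X * X - Y * Y - Z * Z, 2 * W * X, 2 * W * Y, 2 * W * Z);
    }

    public double NormSquared()
    {
        return W * W + X * X + Y * Y + Z * Z;
    }

    public static Quat operator +(Quat a, Quat b)
    {
        return new Quat(a.W + b.W, a.X + b.X, a.Y + b.Y, a.Z + b.Z);
    }
}

public static class JuliaSet
{
    // A sample point along the ray belongs to the Julia set if it stays bounded
    // under z -> z^2 + c (the seed) for the given number of iterations.
    public static bool Contains(Quat point, Quat seed, int maxIterations)
    {
        var z = point;
        for (int i = 0; i < maxIterations; i++)
        {
            z = z.Square() + seed;
            if (z.NormSquared() > 4.0)   // escaped, |z| > 2
                return false;
        }
        return true;
    }
}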

Here is a 2D illustration of how a ray is cast and sampled at regular intervals:

image

The process of sampling each point can be very time consuming, so the distance between sample points cannot be too small, but it cannot be too high either, as it directly relates to the resolution you get, as exemplified by these two images:

image

The left image has a larger distance between sample points than the image to the right. You can clearly see an artifact of how the algorithm works in the rings in the left image (each ring is a sample point along the ray). To optimise the sampling process we can move back after a sample hit and then move forward again in smaller steps.

image

Here is the Julia set viewed from the side, the constant (Julia seed) is (0.4 0.5i 0j 0k).

image

The raytracing gives coordinates in 4D space that belong to the Julia set, and each point can be given a color depending on how far it is from the camera (as above). Further, each point can be lit and shaded. To accurately light a point you need its surface normal; these normals can be calculated by looking at the relative heights of neighbouring points (pixels).
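As a rough illustration of that idea (the scale factor and the exact orientation convention are assumptions, not the code from our renderer), the normal at a pixel can be approximated from the depths of its four neighbours:

using System.Numerics;

public static class Shading
{
    // Approximate the surface normal at pixel (x, y) by taking central differences
    // of the neighbouring depth values (the "relative heights" mentioned above).
    public static Vector3 NormalFromDepths(float[,] depth, int x, int y, float scale)
    {
        float dx = depth[x - 1, y] - depth[x + 1, y];
        float dy = depth[x, y - 1] - depth[x, y + 1];
        return Vector3.Normalize(new Vector3(dx, dy, 2f * scale));
    }
}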

image

The most fascinating aspect of 4D Julia sets can be seen when you animate them. To animate them you render pictures with small changes to the Julia seed constant or to the camera position. What is particularly interesting is when you move the camera in the fourth dimension. What you get is an animation of how the Julia set looks from different four dimensional angles. It is hard to describe what happens, but basically the shape morphs into different shapes when viewed from different four dimensional angles :)

During the process of coding our raytracer we accidentally made an error in the quaternion multiplication, which resulted in a very strange shape:

image

We named these über quaternions. Later when we were writing the paper we discovered that we had used commutative multiplication rules which resulted in hypercomplex numbers.

Here is another picture showing how the shape gets more detail depending on the number of times you recurse over the Julia algorithm:

image

Isn't it fascinating and beautiful how such a simple algorithm can define such an intricate structure? The raytracer was written in C++ and my plan is to (someday, maybe next year) port / rewrite it in F# :)

The paper is very technical so I doubt that it will interest many, but here is the link and abstract:

By using rules defined by quaternion algebra the Julia set can be extended to four dimensions. Techniques for visualizing the four dimensional Julia set as a three dimensional object has previously been explored, however those techniques usually ignores the fourth dimension. In this paper we describe our attempt to extend the already established technique of ray tracing Julia sets to fully incorporate its four dimensional properties. We also discuss an optimisation algorithm that drastically increases the amount of details in images and shortens rendering time.

Link to the paper: Raytracing4D.pdf

Movies:

  1. 4D Camera move: 4d_cameramove.avi
  2. Another 4D Camera move: r4d.avi
  3. Moving the seed in circle (on the complex pane): cycle.avi
  4. Moving the seed from left to right (on the complex pane): cwalk.avi
  5. Moving the camera in a circle around the hypercomplex julia set:  uber_highres.avi

It is worth pointing out that in the first two movies with 4D camera moves it is only the camera that is moving; the Julia seed never changes, only the angle the object is viewed from. The paper was written with LaTeX, which was a lot of fun to learn, although quite hard to use. But the output looks so much more professional.

Were you also a graphics geek? Did you write a flame or plasma effect in assembler, or use Allegro to write DOS games? I sure did :)

My first try at writing a DSL for WatiN was such a fun experience that I decided to have another go. I wanted to create something with a slightly more natural, sentence-like syntax. Here are some tests showing off the new syntax:

--- Can filter by author ---
goto address "http://demo.codesaga.com/". 
click the link with the text "MvcContrib".
select "torkel" from the list with the id #author-fitler.
page should contain the text "Filtering view by author torkel".

--- Can filter by date --- 
goto address "http://demo.codesaga.com". 
click the link with the text "xUnit".
set focus to the textbox with the id #date-filter.
click the link with the text "2".
page should contain the text "Filtering view by date".

--- Can expand diff in changeset view (via ajax) ---
goto address "http://demo.codesaga.com/history/xUnit?cs=25434".
click the element with the class name @cs-item-diff.
page should contain the element with the class name @code-cell, wait for
it 3 seconds.

The added verbosity might be too much for programmers, but the point of making something like this more readable is to make acceptance tests understandable by non-programmers. I am not saying that acceptance tests are something that shouldn't involve developers, but having them accessible to non-programmers can be very valuable. I am not sure why, but I kind of like the verbosity in this case (I usually don't). It would be very easy to make some words optional so one can write "click #edit" as a shortening of "click the link with the id #edit".

The MGrammar for this language:
module CodingInstinct {
    import Language;
    import Microsoft.Languages;
    export BrowserLang;
 
    language BrowserLang {
                  
        syntax Main = t:Test* => t;
        
        syntax Test = name:TestName
            a:ActionList => Test { Name { name }, ActionList { a } };
                                      
        syntax ActionList
          = item:Action => [item]
          | list:ActionList item:Action => [valuesof(list), item];
                             
        syntax Action = a:ActionDef "." => a;
        syntax ActionDef
            = a:GotoAction => a
            | a:ClickAction => a
            | a:SelectAction => a
            | a:TextAssert => a
            | a:ElementAssert => a
            | a:TypeAction => a
            | a:SetFocusAction => a;
                            
        syntax GotoAction = "goto" "address"? theUrl:StringLiteral
            => GotoAction { Url { theUrl } };
                
        syntax ClickAction = "click" "the"? ("link" | "element")? ec:ElementConstraint 
            => ClickAction { Constraint { ec } };
        
        syntax TypeAction = "type" value:StringLiteral "into" "the" "textbox" ec:ElementConstraint
            => TypeAction { Value { value }, Constraint { ec } };
            
        syntax SelectAction = "select" value:StringLiteral "from" "the" "list" ec:ElementConstraint
            => SelectAction { Value { value }, Constraint { ec } };
        
        syntax TextAssert = "page should contain" "the" "text" text:StringLiteral 
            => TextAssert { Value { text } };
            
        syntax ElementAssert = "page should contain" "the" "element" ec:ElementConstraint 
            wait:ElementWait?
            => ElementAssert { Constraint { ec }, Wait { wait } };
            
        syntax ElementWait = "wait" "for" "it" sec:Base.Digits ("second" | "seconds") 
            => sec;
            
        syntax SetFocusAction = "set" "focus" "to" "the" "textbox" ec:ElementConstraint 
            => SetFocusAction { Constraint { ec } };
                 
        syntax ElementConstraint
            = "with" "the" "text" name:StringLiteral => TextConstraint { Value { name } } 
            | "with" "the" "id" name:ElementId => IdConstraint { Value { name } } 
            | "with" "the" "class" "name" name:ElementClass => ClassConstraint { Value { name } };
                  
        token TestName = "--- " (Base.Letter|Base.Whitespace)+ " ---";    
        token ElementId = '#' (Base.Letter|'-'|'_')+;
        token ElementClass = '@' (Base.Letter|'-'|'_')+;
                                                    
        interleave Skippable
          = Base.Whitespace+ 
          | Language.Grammar.Comment
          | Base.NewLine
          | ",";
                
        syntax StringLiteral
          = val:Language.Grammar.TextLiteral => val;        
    }

}

Another improvement in this new DSL syntax is the format for specifying an element id or class name. In the above grammar these are defined as tokens, where ids begin with # and class names with @, followed by any word. The nice thing about a token is that you can add a Classification attribute to it to specify which token category it belongs to. Classification names are linked to font and color styles (i.e. syntax highlighting).
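For example, decorating the two tokens could look like this (the classification name here is a guess; the only classification used elsewhere in these grammars is "Keyword"):

@{Classification["Identifier"]} token ElementId = '#' (Base.Letter|'-'|'_')+;
@{Classification["Identifier"]} token ElementClass = '@' (Base.Letter|'-'|'_')+;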

image 

To get the DSL to actually execute you need the MGraph node tree that the parser spits out. The MGraph is not something that you want to work with directly as it is pretty low level. When I did the first version of this WatiN DSL I spent the majority of the time figuring out how to parse and deserialize the MGraph into a custom set of AST classes. In the process I wrote a very basic generic MGraph -> .NET classes deserializer.

Luckily, as Don Box pointed out in the comments to my previous post, SpankyJ has written a much better deserializer that converts the MGraph into Xaml via an MGraphXamlReader. It was very easy to switch to his implementation as he had some useful method extensions on the DynamicParser.

Example:

DynamicParser parser = LoadExampleGrammar();

var xamlMap = new Dictionary<Identifier, Type>
    { { "Person", typeof(Person) } };

var people = parser.Parse<List<object>>(testInput, xamlMap);

But having to define the mapping between MGraph node names and .NET classes manually like this was something I did not like. I wanted something with a more convention based approach. Roger Alsing is also doing some work with MGrammar and he gave me this great piece of code which I modified slightly:

public Dictionary<Identifier, Type> GetTypeMap()
{
  return Assembly
    .GetExecutingAssembly()
    .GetTypes()
    .Where(t => t.Namespace.StartsWith("WatinDsl.Ast"))
    .Where(t => !t.IsAbstract)
    .ToDictionary
    (
      t => (Identifier)t.Name,
      t => t
    );
}

Pretty simple code really, it just creates a dictionary of all the non-abstract types in the namespace WatinDsl.Ast.

I basically rewrote the AST for this new version; now most actions have a Constraint property that determines what element the action is targeting. Here is a sample:

public class ClickAction : IAction
{
    public IElementConstraint Constraint { get; set; }
    
    public void Execute(IBrowser browser)
    {
        browser.Element(Constraint.Get()).Click();
    }
}

public class IdConstraint : IElementConstraint
{
  public string Value { get; set; }

  public AttributeConstraint Get()
  {
    return Find.ById(Value.Substring(1));
  }
}

For the full code: WatinDsl_2.zip

This is still just an experimental spike for learning MGrammar, but it is also an interesting scenario for exploring the potential in a browser automation language. Is a browser automation language, like the one I have created, something that you would find useful? What would your syntax look like?

Working with MGrammar in Intellipad's split view in fullscreen on a 26" monitor is pure joy :)

image

Changing the default color scheme in Intellipad is easy, just modify the Intellipad\Settings\ClassificationFormats.xcml file. It looks like this:

<act:Export Name='{}{Microsoft.Intellipad}ClassificationFormat'>
  <ls:ClassificationFormat Name='Unknown' 
                           FontSize='13' 
                           FontFamily='Consolas' 
                           Foreground='#FFEEEEEE' />
</act:Export>
<act:Export Name='{}{Microsoft.Intellipad}ClassificationFormat'>
  <ls:ClassificationFormat Name='Numeric' 
                           Foreground='#FFEEEEEE' />
</act:Export>

///....

The hard part was figuring out how to change the background color, which currently can't be done by changing an xml config file but can be accomplished with a small Python snippet.

@Metadata.CommandExecuted('{Microsoft.Intellipad}BufferView', 
        '{Microsoft.Intellipad}SetBlackBackground', 
        'Ctrl+Shift+F2')
def SetBlackBackground(target, bufferView, args):
   bufferView.TextEditor.TextView.Background = System.Windows.Media.Brushes.Black

I got this snippet from Vijaye Raji (SUPER NINJA). You have got to dig someone with SUPER NINJA in their Microsoft email display name :)

image

If you are attending the Øredev conference this week, be sure to check out the ALT.NET track. First out is Joakim Sundén with "ALT.NET - Are you ready for the Red Pill?". I just reviewed his slides and it's a great introduction to ALT.NET, giving both a detailed background and a description of the term without being divisive or elitist.

I am not attending Øredev this year; I wish I was. Besides the ALT.NET track there is a terrific DDD track with presenters like Jimmy Nilsson and Eric Evans. Robert C. Martin is also presenting a keynote and a talk on clean code.

The Cornerstone event Pimp My Code was finally announced today (after a series of delays). I was originally scheduled to talk on the Dependency Inversion Principle and Inversion of Control containers, but due to the delays and the reorganization from a single-day conference to a small evening event the schedule was reworked, so I will not be doing the talk. Instead I will hopefully get the chance to give the talk at Developer Summit next year (probably in March).

I get mails from time to time asking for my Visual Studio color scheme settings. The code examples on this blog use the same color scheme I use in Visual Studio, a scheme I tweaked together about two years ago after reading this post by Jeff Atwood, in which he had this screenshot:

 image

I really liked how this looked, so I spent the next week creating and constantly tweaking a code and html color scheme inspired by the above image (I did manage to get some coding done as well).

This is the result:

image

This is how xml looks:

image

I was really happy with how the xml color scheme turned out :)

So if you like it:

Updated: Now the files only contain "Fonts & Color" settings.

The only part of the Oslo presentations at PDC that caught my attention was the MGrammar language (Mg).

The Mg language provides simple constructs for describing the shape of a textual language – that shape includes the input syntax as well as the structure and contents of the underlying information

The interesting part of Mg is how it combines schema, data transformation, and functional programming concepts to define rules and lists. Creating and designing a language is hard and requires some knowledge of how parsers work; as Frans Bouma and Roger Alsing have pointed out, Mg and Oslo are not going to change that. I haven't worked professionally with language parsers; I have written a C-like language compiler using LEX and YACC, but that was many years ago. One of the most popular tools for language creation today is ANTLR, and it would be great if someone knowledgeable in both ANTLR and Mg would write a comparison.

Anyway, I was intrigued by Mg so I decided to play around with it. I decided to create a simple DSL over the WatiN browser automation library. I wanted to be able to execute scripts that looked like this:

test "Searching google for watin"
    goto "http://www.google.se"
    type "watin" into "q"
    click "btnG"
    assert that text "WatiN Home" exists
    assert that element "res" exists
end

Maybe not the best possible DSL for browser testing, one could probably come up with something even more natural sounding, but it will be sufficient for now. To start creating the language specification I started Intellipad (an application that is included in the Oslo CTP). Getting the nice three-pane view, with input, grammar, and output windows, is kind of tricky. First switch the current mode to MGrammar mode; this is done by pressing Ctrl+Shift+D to bring up the minibuffer and then entering "SetMode('MGMode')". Now the MGrammar Mode menu should be visible. From this menu select "Tree Preview"; this will bring up an open file dialog, in which you create an empty .mg file and select that file.

image

I entered my goal DSL in the dynamic parser window and began defining the syntax and data schema. After an hour of trial and error I arrived at this grammar:

module CodingInstinct {
    import Language;
    import Microsoft.Languages;
    export BrowserLang;
 
    language BrowserLang {
                  
        syntax Main = t:Test* => t;
        
        syntax Test = TTest name:StringLiteral a:ActionList TEnd
            => Test { Name { name }, a };
                       
        syntax ActionList
          = item:Action => ActionList[item]
          | list:ActionList item:Action => ActionList[valuesof(list), item];
                             
        syntax Action 
            = a:GotoAction => a
            | a:TypeAction => a
            | a:ClickAction => a
            | a:AssertAction => a;
            
        syntax GotoAction = TGoto theUrl:StringLiteral => GotoAction { Url { theUrl } };
        syntax TypeAction = TType text:StringLiteral TInto id:StringLiteral 
             => TypeAction { Text { text }, ID { id } };
        
        syntax ClickAction = TClick id:StringLiteral => ClickAction { ID { id } }; 
        syntax AssertAction = 
            TAssert TText text:StringLiteral TExists => AssertAction { TextExists { text } }
          |
            TAssert TElement element:StringLiteral TExists => AssertAction { ElementExists { element } }           ;
        
        @{Classification["Keyword"]} token TTest = "test";            
        @{Classification["Keyword"]} token TGoto = "goto";
        @{Classification["Keyword"]} token TEnd = "end";
        @{Classification["Keyword"]} token TType = "type";
        @{Classification["Keyword"]} token TInto = "into";
        @{Classification["Keyword"]} token TClick = "click";
        @{Classification["Keyword"]} token TAssert = "assert that";
        @{Classification["Keyword"]} token TExists = "exists";        
        @{Classification["Keyword"]} token TText = "text";        
        @{Classification["Keyword"]} token TElement = "element";
        
        interleave Skippable
          = Base.Whitespace+ 
          | Language.Grammar.Comment;       
                
        syntax StringLiteral
          = val:Language.Grammar.TextLiteral => val;        
    }

}

I have no idea if this is a reasonable grammar for my language or if it can be written in a simpler/smarter way. The grammar generates this M node graph:

[
  Test{
    Name{
      "\"Search google for watin\""
    },
    ActionList[
      GotoAction{
        Url{
          "\"http://www.google.se\""
        }
      },
      TypeAction{
        Text{
          "\"asd\""
        },
        ID{
          "\"google\""
        }
      },
      ClickAction{
        ID{
          "\"btnG\""
        }
      },
      AssertAction{
        TextExists{
          "\"text\""
        }
      },
      AssertAction{
        ElementExists{
          "\"asd\""
        }
      }
    ]
  }
]

The problem I had now was how to parse and execute this graph; I could not find any documentation on how to generate C# classes from M schema. What is included in the CTP is a C# library to navigate the node graph that the language parser generates. This node graph is not very easy to work with; I wanted a GotoAction to be automatically mapped to a GotoAction class, the TypeAction to a TypeAction class, etc. To accomplish this I wrote a simple M node graph deserializer.

This is the AST I want the M node graph to deserialize to:

public class Test
{
  public string Name { get; set; }
  public IList<IAction> ActionList { get; private set; }

  public Test()
  {
    ActionList = new List<IAction>();
  }
}

public interface IAction
{
    void Execute(IBrowser browser);
}

public class GotoAction : IAction
{
    public string Url { get; set; }

    public void Execute(IBrowser browser)
    {
        browser.GoTo(Url);
    }
}

It was quite tricky to write a generic deserializer, mostly because the M node object graph is kind of weird (Nodes, Sequences, Labels, Values, EntityMemberLabels, etc). Here is the code:

public class MAstDeserializer
{
    private GraphBuilder builder;

    public MAstDeserializer()
    {
        this.builder = new GraphBuilder();
    }

    public object Deserialze(object node)
    {
        if (builder.IsSequence(node))
        {
            return DeserialzeSeq(node).ToList();
        }

        if (builder.IsNode(node))
        {
            return DeserialzeNode(node);
        }

        return null;
    }

    private object DeserialzeNode(object node)
    {
        var name = builder.GetLabel(node) as Identifier;

        foreach (var child in builder.GetSuccessors(node))
        {
            if (child is string)
            {
                return UnQuote((string)child);
            }
        }

        var obj = Activator.CreateInstance(Assembly.GetExecutingAssembly().FullName, "WatinDsl.Ast." + name.Text).Unwrap();
        
        InitilizeObject(obj, node);
        
        return obj;
    }

    private void InitilizeObject(object obj, object node)
    {
        foreach (var child in builder.GetSuccessors(node))
        {
            if (builder.IsSequence(child))
            {
                foreach (var element in builder.GetSequenceElements(child))
                {
                    AddToList(obj, child, element);
                }
            }
            else if (builder.IsNode(child))
            {
                obj.SetPropery(builder.GetLabel(child).ToString(), DeserialzeNode(child));
            }
        }
    }

    private void AddToList(object obj, object parentNode, object element)
    {
        var propertyInfo = obj.GetType().GetProperty(builder.GetLabel(parentNode).ToString());
        var value = propertyInfo.GetValue(obj, null);
        var method = value.GetType().GetMethod("Add");
        method.Invoke(value, new[] { DeserialzeNode(element) });
    }

    private IEnumerable<object> DeserialzeSeq(object node)
    {
        foreach (var element in builder.GetSequenceElements(node))
        {
            var obj = DeserialzeNode(element);
            yield return obj;
        }
    }

    private object UnQuote(string str)
    {
        return str.Substring(1, str.Length - 2);
    }
}

I guess in future Oslo previews something like the above deserializer will be included, as it is essential for creating executable DSLs. Maybe the Oslo team has another option for doing this, for example generating Xaml from the node graph, which can then initialise your AST.

So how do we compile and run code in our new WatiN DSL? First we need to compile the grammar .mg file into a .mgx file; this is done with the MGrammarCompiler. We can then use the .mgx file to create a parser, and the parser will generate a node graph which we deserialize into our custom AST.

public class WatinDslParser
{
  public object Parse(string code)
  {
    return Parse(new StringReader(code));
  }

  public object Parse(TextReader reader)
  {
    var compiler = new MGrammarCompiler();
    compiler.FileNames = new[] { "BrowserLang.mg" };
    compiler.Target = Target.Mgx;
    compiler.References = new string[] { "Languages", "Microsoft.Languages" };
    compiler.Execute(ErrorReporter.Standard);

    var parser = MGrammarCompiler.LoadParserFromMgx("BrowserLang.mgx", "CodingInstinct.BrowserLang");

    object root = parser.ParseObject(reader, ErrorReporter.Standard);

    return root;
  }
}

The reason I compile the grammar from code every time I run a script is so I can easily change the grammar and rerun without going through a separate compiler step. The Parse function above returns the M node graph. Everything is glued together in the WatinDslRunner class:

public class WatinDslRunner
{
  public static void RunFile(string filename)
  {
    var parser = new WatinDslParser();
    var deserializer = new MAstDeserializer();

    using (var reader = new StreamReader(filename, Encoding.UTF8))
    {
      var rootNode = parser.Parse(reader);
      var tests = (IEnumerable)deserializer.Deserialze(rootNode);

      foreach (Test test in tests)
      {
        RunTest(test);
      }
    }
  }

  public static void RunTest(Test test)
  {
    Console.WriteLine("Running test " + test.Name);
    using (var browser = BrowserFactory.Create(BrowserType.InternetExplorer))
    {
      foreach (var action in test.ActionList)
      {
        action.Execute(browser);
      }
    }
  }
}

If you have problems with the code above, please remember that the code in this post is just an experimental spike to learn MGrammar and the M Framework library. If you want to experiment with this yourself, download the code+solution: WatinDsl.zip.

Summary and some other thoughts on Oslo/Quadrant

It was quite a bit of work going from my textual DSL to something executable. The majority of the time was spent figuring out the M node graph and how to parse and deserialize it; writing the grammar was very simple. MGrammar will definitely make it easier to create simple data definition languages that could replace some existing xml based solutions, but I doubt that it will be widely used in enterprise apps for creating executable languages. Maybe it is more suited for tool and framework providers. This is the first public release, so a lot will probably change and be improved; it is too early to say how much of an impact M/Oslo will have for .NET developers.

I got home from PDC quite puzzled over Oslo and the whole model-driven development thing. They only talked about data, data, data; I don't think they mentioned the word BEHAVIOR even once during any Oslo talk that I attended, and to me that is kind of important :) I asked others about this and most agreed that they did not understand the point of Oslo, or how it would improve/change application development significantly.

Sure, I found Quadrant to be a cool application that could potentially replace some Excel / Access solutions, but what else? In what way is Quadrant interesting for application developers? It would be interesting to get some comments on what others think about MGrammar, Quadrant & model-driven development :)

The keynote with Scott Gu just ended and I am sitting in the keynote hall waiting for the second keynote.

The most interesting points:

  • Visual Studio 2010 will use WPF; the UI will not only look and work better but, most of all, will be easier to extend using MEF. Scott showed how to create a really nice visualization of method xml comments by extending the code editor. This was done by implementing a very simple interface, exposing the class using the MEF Export attribute and dropping the assembly in a Visual Studio components directory; no need to register components in the registry anymore.
  • Windows 7 will contain some actual usability improvements not only a new glass look :)
    • They showed it running on a netbook with 1 GB RAM where the OS used 512 MB; it seems they will be optimising it for this scenario.
  • Live Mesh seems interesting, Office 14 seems to use it a lot.
DSC00089

 

The second keynote with Don Box has begun and I better listen...

Updated with more pictures:

Picture 059

Picture 051

Pretty big room. Those gray rectangles hanging from the ceiling are projector screens for those in the back.

Picture 028

I am the guy in the black t-shirt.

Picture 045

Out last night at Tailors steak house; from left to right: Joakim Sundén, Patrik Löwendahl, Magnus Mårtensson.

If you are running an ASP.NET MVC application under IIS6 you need an extension in the URL in order for the request to be handled by the ASP.NET runtime. You can configure IIS so that all requests, no matter what extension, are handled by the aspnet_isapi filter, but if you do this your application must handle content requests as well (like css and image files). The best way to handle extensionless urls under IIS6 is to use Helicon Tech's ISAPI_Rewrite 3. This isapi filter rewrites the url before it is processed; in effect it takes an extensionless url and rewrites it into a url with an extension.

ISAPI_Rewrite is a commercial product; however, there is a free lite version that works really well (it has some limitations). In ISAPI_Rewrite you write the rewrite rules using regular expressions:

RewriteEngine on
RewriteBase /

RewriteRule ^Home/(.*?)$ Home.mvc/$1 

Charles Vallance has written an excellent post on extensionless urls with ASP.NET MVC. The problem with his solution is that it requires each route rule to be duplicated, one with an extension and one without.

routes.MapRoute(
  "Default", 
  "{controller}/{action}/{id}", 
  new { controller = "Home", action = "Index", id = "" } 
);

routes.MapRoute(
  "Default", 
  "{controller}.mvc/{action}/{id}", 
  new { controller = "Home", action = "Index", id = "" } 
);

The reason you need two rules is that one is used for inbound urls (which after isapi_rewrite have an extension) and the other for outbound urls, so that links generated by helpers are extensionless. This duplication of route rules might not be a big problem if you only use the default routing schema, but if you use a lot of specific routes you want a better way to declare them so you do not need to duplicate each one.

I have blogged previously about the route fluent interface I created for CodeSaga. I extended this further to handle this route duplication. In the url definition I only need to place a marker where the extension is going to be:

SagaRoute
  .MappUrl("admin$/repository/edit/{reposName}")
  .ToDefaultAction<RepositoryAdminController>(x => x.Edit(null))
  .AddWithName(RouteName.EditRepository, routes);

The actual route duplication is handled by the AddWithName function:

public SagaRoute AddWithName(string routeName, RouteCollection routes)
{
    var clone = this.Clone();
    Url = Url.Replace("$", "");

    if (ShouldAddExtensionlessRoute())
    {
        routes.Add(routeName, this);
        routes.Add(routeName + ".mvc", clone);
    }
    else
    {
        routes.Add(routeName, clone);
    }

    return this;
}

public SagaRoute Clone()
{
    var clone = new SagaRoute(Url.Replace("$", ".mvc"));

    foreach (var pair in Defaults)
        clone.Defaults.Add(pair.Key, pair.Value);

    foreach (var pair in Constraints)
        clone.Constraints.Add(pair.Key, pair.Value);

    return clone;
}

private bool ShouldAddExtensionlessRoute()
{
    return RuntimeContext.Config.UrlExtensionMode == UrlExtensionMode.WithoutExtension;
}

In the code above I clone the route, replace the dollar sign with ".mvc" in the clone and remove the dollar from the original route. The extensionless route must be added before the one with the extension because it needs precedence when generating outbound urls. There is also a setting that controls whether the extensionless route is added at all, for IIS6 users who don't want to bother with isapi_rewrite.

I hope this comes in handy if you are working on an ASP.NET MVC application that needs to support IIS6 and IIS7 in both extension and extensionless modes.

If you are new to MVC web development it can initially be tricky to figure out how to handle UI features that are used by multiple views. In WebForms you would simply create a control that encapsulated the element's look and function. In CodeSaga there are many views that share view elements; for example, many views have tabs. In order to handle the common view elements I organized the strongly typed view models into an inheritance hierarchy.

image

The above class diagram shows a subset of the view models in CodeSaga. The ViewModelBase exposes a list of MenuTabs and a method AddTabs that inheritors can use to add menu tabs. Here is the code for the RepositoryContextViewModel, the base class for all views that have the history, browse, search and authors tabs.

public class RepositoryContextViewModel : ViewModelBase
{
    public RepositoryUrlContext UrlContext { get; set; }
    
    public RepositoryContextViewModel()
    {
      AddTabs(
        MenuTab
          .WithName(MenuTabName.History)
          .ToAction<HistoryController>(x => x.ViewHistory("", null, null)),
        MenuTab
          .WithName(MenuTabName.Browse)
          .ToAction<HistoryController>(x => x.ViewBrowse("")),
        MenuTab
          .WithName(MenuTabName.Search)
          .ToAction<SearchController>(x => x.Search("")),
        MenuTab
          .WithName(MenuTabName.Charts)
          .ToAction<ChartsController>(x => x.ViewChart("")),
        MenuTab
          .WithName(MenuTabName.Authors)
          .ToAction<AuthorsController>(x => x.ViewStats("")));
    }    
}

The AdminViewBase class defines the admin tabs in a similar way. The view code that then renders the tabs is very simple:

<ul class="menu">
  <for each="var tab in tabs">        
    <li class="on?{tab.IsActive}">
      ${Html.ActionLink(tab.Text, tab.Action, tab.Controller)}
    </li>
  </for>        
</ul>

image

The only job left to do in the controller action is to set the currently active tab, like this:

public ActionResult ViewDiff(string urlPath, int? r1, int? r2)
{
  var urlContext = RepositoryUrlContext.FromString(urlPath);
      
  var diff = repository.GetFileDiff(urlContext.ReposName, urlContext.Path, r1.Value, r2.Value);

  var model = new DiffViewModel
  {
    UrlContext = urlContext,
    FileDiff = diff,
    ActiveTabName = MenuTabName.History
  };

  return View("Diff", model);
}

Setting the active tab like this could be refactored to be handled in a more declarative way, for example with an attribute on the controller class or action method. But I haven't found the need to do that yet as I think it's pretty declarative as it is.
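A rough sketch of what such an attribute could look like (ActiveTabAttribute is a hypothetical name, and it assumes the ActiveTabName property lives on ViewModelBase and that the tab name is available as a constant):

public class ActiveTabAttribute : ActionFilterAttribute
{
    private readonly string tabName;

    public ActiveTabAttribute(string tabName)
    {
        this.tabName = tabName;
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // If the action produced a tab-aware view model, mark the active tab on it.
        var model = filterContext.Controller.ViewData.Model as ViewModelBase;
        if (model != null)
            model.ActiveTabName = tabName;
    }
}

// Usage: [ActiveTab("History")] on the action method or the controller class.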

One can question why I have used a MenuTab presentation model at all: why not define the tabs directly in the views? Since the tabs are only declared statically you could achieve a similar result using a hierarchy of partial views. The reason I did not choose that solution is that I actually needed (or wanted) to support creating tabs dynamically in an easy way. This is used in CodeSaga when you click the edit link in the repository list (in the admin), which opens the edit view in a new tab.

image

The controller action for this:

public ActionResult Edit(string reposName)
{
    var model = new RepositoryEditViewModel();
    model.Repository = reposRepository.GetByName(reposName);
    
    model.AddTabs(
      MenuTab
        .WithName("Edit " + reposName)
        .ToAction<RepositoryAdminController>(x => x.Edit(reposName))
        .SetActive());
        
    return View(model);
}

It is worth pointing out that the term view model in this post should not be confused with Presentation Model. Instead I see it as a sort of container data structure for all the domain and presentation model data a specific view needs.

I found a good use for the InlineData xUnit attribute:

public class DateDiffCalculatorFacts
{
    [Theory,
    InlineData("2008-09-25 14:31", "1 minute"),
    InlineData("2008-09-25 15:05", "35 minutes"),
    InlineData("2008-09-25 16:05", "1 hour 35 minutes"),
    InlineData("2008-09-25 17:05", "2 hours 35 minutes"),
    InlineData("2008-09-25 19:05", "4 hours"),
    InlineData("2008-09-25 02:30", "12 hours"),
    InlineData("2008-09-24 00:00", "1 day 14 hours"),
    InlineData("2008-09-23 00:00", "2 days 14 hours"),
    InlineData("2008-09-20 00:00", "5 days"),
    InlineData("2008-08-20 00:00", "1 month 5 days"),
    InlineData("2008-07-01 00:00", "2 months 24 days"),
    InlineData("2008-05-01 00:00", "4 months"),
    InlineData("2007-07-20 00:00", "1 year 2 months"),
    InlineData("2005-07-20 00:00", "3 years")]
    public void Can_get_correct_age_for_date(string dateString, string expectedAge)
    {
        var date = DateTime.Parse(dateString, CultureInfo.InvariantCulture, DateTimeStyles.None);
        var referenceDate = DateTime.Parse("2008-09-25 14:30:00", CultureInfo.InvariantCulture, DateTimeStyles.None);

        var calc = new DateDiffCalculator(date, referenceDate);

        string age = calc.ToString();
        Assert.Equal(expectedAge, age);
    }
}

This is the first time I have used this style of testing (called RowTests in MbUnit/NUnit). Maybe in this case it is not the right thing to do from a pure TDD perspective, as the test name is not very descriptive of what is being tested. If each InlineData case were extracted to a separate test, they could be given more meaningful names like "Measure_will_be_in_plural_when_more_than_one" and "Skip_sub_measure_when_main_measure_is_higher_than_two". On the other hand, having the test like the above made it dead simple and fast to add new test cases, and it is still quite apparent what the intended outcome is.

There are currently 188 sessions for this year's PDC and they are still adding new sessions! I am one of the lucky ones who will be going and I can't wait; there are so many interesting sessions and after-conference parties :)

Here are some sessions I am looking forward to:

  • The Future of C# - Anders Hejlsberg
  • Deep Dive: Dynamic Languages in Microsoft .NET - Jim Hugunin
  • IronRuby: The Right Language for the Right Job - John Lam
  • Microsoft .NET Framework: CLR Futures
  • Managed Extensibility Framework: Overview - Glenn Block
  • Under the Hood: Advances in the .NET Type System

I have high hopes for Anders Hejlsberg's talk on the future of C#. I remember his talk at PDC 2005 where he presented LINQ; who knows what cool stuff he might unveil this time.

Many of the sessions cover Oslo. I am not that interested in Oslo, especially after the information that has been released describing it as "Microsoft's data-centric platform, which is aimed at empowering nondevelopers to build distributed applications". There are also a ton of sessions on Windows 7, Silverlight and Visual Studio.

Anyway, I hope to meet some fellow bloggers and .NET enthusiasts there :)

I just completed Robert C. Martin's latest book, Clean Code: A Handbook of Agile Software Craftsmanship, which was a good read but far from as good as his Agile Principles, Patterns, and Practices book, which I rank among my favorite programming books. If you haven't read it, stop reading this and go read it! No, just kidding, stay..

Clean Code begins with an interesting chapter where the concept of clean code is defined. The chapter includes about a dozen quotes from famous programmers who are asked to define what clean code means to them. They were all good, but I especially liked the definition by Michael Feathers (author of Working Effectively with Legacy Code):

/.../ Clean code always looks like it was written by someone who cares. /.../

I think this is very true: if you really care about the code you write, you won't leave it in a messy state but will continue to refactor it until you are satisfied.

So why clean code? Well, Martin argues, correctly, that the ratio of time spent reading code versus writing it is very high, and that we are constantly reading old code in order to write new code. So making code easier to read makes it easier to write.

image

So how do you write clean code? Well, that is what the next chapters cover, for example meaningful names, small functions, formatting, unit tests, etc. There is a lot of code in some of these chapters, and I mean a lot, so it can be pretty tedious at times. Robert tries to show step by step how he improves some existing pieces of real code, but I found it hard to follow exactly what he was doing, as the code listings were so long and there was no color highlighting to mark changed lines or syntax.

I think Robert should consider doing screencasts on writing clean code or doing TDD; it is a medium better suited to showing how you evolve and change code in small steps. These could later be included on CDs with his books.

Anyway, despite its problems, Clean Code is still a good read. I did learn a few new tricks and will be more strict about keeping my functions shorter :)

Damn, writing installation & documentation is boring. I never expected it to be this much work just to get a first release out the door. Well, now that it is done I can spend time implementing new features again.

Yesterday I finished the implementation of the history log rss feed. I was also able to fix CSS issues with Firefox 2 and Chrome. There is a new download available on www.codesaga.com, and the demo site is also updated.

The rss feed was interesting to implement, as it was the first time I had ever implemented one. It was very easy: just a normal view, but with the response content type set to text/xml.
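In controller terms it amounts to something along these lines (a sketch; the action name and the model-building helper are made up for the example):

public ActionResult HistoryRss(string urlPath)
{
    // Assemble the changesets to include in the feed (hypothetical helper).
    HistoryRssViewModel model = BuildHistoryRssViewModel(urlPath);

    // A normal view renders the feed, we just change the response content type.
    Response.ContentType = "text/xml";
    return View("HistoryRss", model);
}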

<?xml version="1.0"?>
<viewdata model="HistoryRssViewModel" />
<viewdata urlContext="RepositoryUrlContext"/>
<rss version="2.0">
    <channel>
        <title>CodeSaga - ${Html.Encode(urlContext.ReposName)}</title>
        <link>
            ${Url.AbsoluteRouteUrl(RouteName.History, new {urlPath=urlContext.Url})}
        </link>
        <description>
            History for the ${Html.Encode(urlContext.ReposName)} repository and directory ${Html.Encode(urlContext.Path)}            
        </description>

        <item each="var changeset in ViewData.Model.Changesets">
            <var changesetUrl="Url.AbsoluteRouteUrl(RouteName.ChangesetDetail, new {urlPath=urlContext.Url, cs=changeset.Revision})" />
            
            <title>${Html.Encode(changeset.Message.TrimWithElipsis(90))}</title>
            <pubDate>${changeset.Time.ToString("R")}</pubDate>
            <author>${Html.Encode(changeset.Author)}</author>    
            <link>${changesetUrl}</link>
            <guid isPermaLink="false">${changesetUrl}</guid>
            
            <description>
                /// rss article html
            </description>                                    
        </item>        
    </channel>
</rss>

I omitted the rss item content to keep the sample smaller. The biggest change from a normal view is that all links and urls must be absolute. For this I created an extension method, AbsoluteRouteUrl, on the UrlHelper class.
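The extension method is not shown in the post, but it essentially prefixes the normal route url with the scheme and host; a reconstruction could look like this:

public static class UrlHelperExtensions
{
    public static string AbsoluteRouteUrl(this UrlHelper url, string routeName, object routeValues)
    {
        var requestUrl = url.RequestContext.HttpContext.Request.Url;
        string relativeUrl = url.RouteUrl(routeName, routeValues);
        return requestUrl.Scheme + "://" + requestUrl.Authority + relativeUrl;
    }
}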

Since you cannot include css files in the item description you are limited to inline css, and most rss readers will parse the rss item html and remove unwanted tags and css. I tried a simple table layout with cell background colors; it looks pretty good in NewsGator:

image

I also tried Google Reader; it looks similar but not as good, since Google Reader forces a specific width for the rss item content, which is not good for wide monitors (you can override this using a Greasemonkey script).

I haven't figured out a good way to unit test the rss feed yet. I am not talking about the controller action, that was simple; I mean the actual rss output generated by the view. One way would be to do some serious mocking to get the view engine to work in a unit test, or to try WatiN and see how it handles rss content.

I have a lot of fun stuff in the backlog for CodeSaga:

  • Interactive charts and graphs in Silverlight (why not I am doing this app for fun and this sounds like fun to do!)
  • Advanced search options and filtering, search on file content using Lucene.NET
  • Arbitrary diffs, side by side diffs, diff options (context lines)
  • File content view,
  • File history view 
  • Integration with TFS issue tracking
  • Parsing commit message for issue ids and link to issue

Will all this get done? I highly doubt it :) But it is good to have a plan / vision. How come I have had time to work on this? Well, I haven't been coding much at work the last couple of months, so I have had a lot of will / energy to work on something after work and while hung over on Sundays.

And to be honest it just began as an experiment and grew into something more, and the last couple of weeks it has been like "well, you have spent all this time working on this, you might as well release it and not let all that time go to waste". Not that it would have been a waste; I feel like I am getting very efficient with ASP.NET MVC. It feels kind of daunting now that it has been released, because now I feel even more obliged to work on it.