ASP.NET MVC Preview 3 was released today, and while browsing through the code I saw something I have seen before but never really reflected on.

protected internal ViewResult View() {
    return View(null /* viewName */, null /* masterName */, null /* model */);
}

protected internal ViewResult View(object model) {
    return View(null /* viewName */, null /* masterName */, model);
}

See how they have an inline comment with the argument name for each null argument? This is a really nice practice, as it makes function calls where nulls are passed much more readable and understandable. Calls to functions that take nullable objects or booleans can often be very cryptic, since you can't tell what each argument represents. In some languages, like Ruby, you can optionally name arguments:

def View(model)
  return View(viewName: nil, masterName: nil, model: model)
end

For functions that take booleans you have the option of using an enum instead; this can in some cases make function calls a lot more understandable. For example:

FileStream stream = File.Open("foo.txt", true, false);
FileStream stream = File.Open("foo.txt", CasingOptions.CaseSensitive, FileMode.Open);

For parameters that can be null you could instead use the null object pattern, but that is not feasible in every scenario, so I think a short comment like the one in the first code snippet is a nice way to handle this.
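To illustrate, here is a minimal sketch of the null object pattern using a hypothetical ILogger (not from the MVC source, just to show the idea):

public interface ILogger
{
  void Log(string message);
}

// The null object: a do-nothing implementation that callers pass instead of null
public class NullLogger : ILogger
{
  public static readonly NullLogger Instance = new NullLogger();

  public void Log(string message)
  {
    // intentionally empty
  }
}

// The call site stays readable and the callee never needs a null check:
// importer.Import(file, NullLogger.Instance);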


A couple of weeks ago I finished a great book named The Language Instinct: How the Mind Creates Language by the psychologist and cognitive scientist Steven Pinker. It is not a book directly related to programming, but if you are interested in languages (be they human or computer languages) or just popular science in general, I think you will enjoy it. It covers the theory that all human languages share a common universal grammar, and that this grammar is innately encoded in the human brain.

It goes into great detail on how children learn language, a feat that every human child accomplishes just by listening to adults talk. The book can be quite technical, with a lot of linguistic jargon at times, but it is still very readable. I really liked the chapter about how creole languages evolve from pidgin languages.

A pidgin language is (according to Wikipedia):

A pidgin is a simplified language that develops as a means of communication between two or more groups that do not have a language in common, in situations such as trade. Pidgins are not the native language of any speech community, but are instead learned as second languages.

A pidgin is a language with a very simplified or non-existent grammar that requires a lot of context to be understandable (like physically pointing to what or whom you are talking about). A creole language is a well-defined and stable language that has evolved from a pidgin language. A popular theory (the one discussed in the book) describes how creole languages are created by young children learning a pidgin language as their native language, and in that process creating a more complex grammar, with fixed phonology, syntax, morphology, and syntactic embedding. I think it is amazing that children can create a fully developed language in only a single generation.

The book also covers the evolution of the human language ability, how languages change over time and how brain damage affects speech and speech recognition.

Sorry for a non-.NET/programming post, but I just wanted to share this. And yes, I was reading this book when I came up with the title for this blog. Oh, and while I am at it, here are some other great popular science books that I read recently and can highly recommend:

There is a new project on CodePlex called BooLangStudio that adds Boo as a supported language in Visual Studio 2008. It is still in its infancy, but the basics are there: Class Library and Console Application projects are supported, as is basic syntax highlighting. The project was created by Jeff Olson, but there are other contributors, for example James Gregory, who has begun porting over some IntelliSense support from his Boo Visual Studio plugin.

The progress is looking good:

[Screenshot: BooLangStudio in Visual Studio]

I have been wishing for some Boo Visual Studio integration for a long time, so I thought I would try to contribute to BooLangStudio. However, the source is on GitHub, which meant I had to learn some Git :)

It wasn't that bad actually: I downloaded msysgit and followed this guide. The cool thing about Git is its distributed nature, which makes it very different from Subversion. The first thing I did, since I wanted to contribute, was to fork Jeff Olson's repository. This is very simple: just register on GitHub and click the fork button:

[Screenshot: the fork button on GitHub]

When you fork a GitHub repository you get your own remote repository that you can push commits to. After I had created my fork I could ask Git to create a local clone of that repository. Git does not have local "working copies" but full local repositories. This means that you can view history logs and do commits, merges and branching all locally, without any network connection.

When I had made a commit I pushed it to my remote repository on GitHub. GitHub has very nice commit/diff visualizations:

[Screenshot: a commit diff on GitHub]

In your local repository you can configure a list of other remote repositories that you can pull from (pull = fetch + merge). I noticed that James Gregory had a fork where he had committed his initial work on IntelliSense, and I wanted to try this out. So I created a local branch, added James Gregory's fork as a remote repository and pulled his changes into that branch. The cool thing is that merging in Git keeps the complete history of the commits you merge; it almost looks like James Gregory committed directly to my repository.

Everything is done at the command line; I think there is a TortoiseGit in the works, but I am not sure what state it is in. Here is an example of how you can merge changes from a remote repository:

$ git remote add jagregory git://github.com/jagregory/boolangstudio.git
$ git checkout -b jagregory/master
$ git pull jagregory master
$ git checkout master
$ git merge jagregory/master

Github has a very cool network graph that shows forks and commits and where they came from:

[Screenshot: the GitHub network graph]

The red dots on my line represent the commits that I merged from the jagregory branch. If you hover over a dot it will show the commit info! Git and the whole concept of a distributed source control system is very cool and interesting. The usability aspects are not quite there yet: there are a lot of Git commands and parameters to learn, and it takes some time to understand how the concepts work.

At Developer Summit in Stockholm a couple of weeks ago I attended a great talk by Jim Webber titled "Guerrilla SOA". On one slide he listed some causes of why SOA and ESB solutions quickly degrade into a pile of spaghetti. One of the causes was that developers take:

"The Path of least resistance for individual applications"

This line really resonated with how I feel, not just in an SOA scenario but for application architecture or code in general. When you write code you constantly fight either your own design/architecture or some framework, and what many developers end up doing (me included, sometimes) is taking the path of least resistance; depending on the application design, this is often not the right path.

Ayende had a post a couple of days ago about zero friction & maintainability in which he writes:

"As it turn out, while code may not rot, the design of the application does. But why?

If you have an environment that has friction in it, there is an incentive for the developers to subvert the design in order to produce a quick fix or hack a solution to solve a problem. Creating a zero friction environment will produce a system where there is no incentive to corrupt the design, the easiest thing to do is the right thing to do.

By reducing the friction in the environment, you increase the system maintainability"

A big goal when designing an application or framework should be to make the path of least resistance the right path. This is of course easier said than done, and I think you will never get the perfect design where every right way is also the easy way. So when we head down the path of least resistance knowing it is the wrong way, why do we do it? It could be time constraints, plain laziness or some other hurdle. When making the decision it is important to weigh in the possible future problems and maintainability issues that could turn out to be very costly.

Well that is all I had to say about this for now, stay away from the path of least resistance (unless it is the right path!).

There is a new framework in the NHibernate Contrib project named NHibernate.Validator. The project began as a port of the Java Hibernate Validator project and was started by Dario Quintana. This framework allows you to validate objects in a similar way to other validation frameworks, except that it has out-of-the-box integration with NHibernate's entity lifecycle. This means that you can configure it to perform validation on entity inserts/updates. The NHibernate integration is not required, however.

You can specify the validation rules either as property/field attributes directly in the code or in a separate XML file with a schema similar to the one used in NHibernate mapping files.

Example:

public class User
{
  // backing fields assumed for the example
  private int id;
  private string userName;
  private string email;
  private DateTime createdDate;
  private int age;
  private string creditCardNumber;

  public virtual int Id
  {
      get { return id; }
  }

  [NotEmpty, NotNull]
  public virtual string UserName
  {
      get { return userName; }
      set { userName = value; }
  }

  [Email]
  public virtual string Email
  {
      get { return email; }
      set { email = value; }
  }

  [Past]
  public DateTime CreatedDate
  {
    get { return createdDate; }
    set { createdDate = value; }
  }

  [Min(18, Message="You are too young!")]
  public int Age
  {
    get { return age; }
    set { age = value; }
  }

  [CreditCardNumber]
  public string CreditCardNumber
  {
    get { return creditCardNumber; }
    set { creditCardNumber = value; }
  }
  
  ///... 
}

If you don't want to clutter your code with attributes you can use the XML configuration option instead. Example:

<nhv-mapping xmlns="urn:nhibernate-validator-1.0">
  <class name="NHibernate.Validator.Demo.Winforms.Model.Customer, NHibernate.Validator.Demo.Winforms">    
    <property name="FirstName">
      <not-empty/>
      <not-null/>
    </property>    
   
    <property name="Email">
      <email/>
    </property>

    <property name="Zip">
      <pattern  regex="^[A-Z0-9-]+$" message="Examples of valid matches: 234G-34DA | 3432-DF23"/>
      <pattern  regex="^....-....$" message="Must match ....-...."/>
    </property>    
    
  </class>  
</nhv-mapping>

Personally I can barely stand the NHibernate XML mapping files (I don't like to poke around in XML files that much), so I think I prefer the attribute version. But it is nice to have the option: it makes it possible to validate objects in third-party assemblies that you do not have the code for. There are many more validators than the ones used above, and it is very easy to create custom validators.

You can configure the validation engine in code or in app/web.config; this is how you do it in code:

NHVConfiguration nhvc = new NHVConfiguration();
nhvc.Properties[Environment.ApplyToDDL] = "false";
nhvc.Properties[Environment.AutoregisterListeners] = "true";
nhvc.Properties[Environment.ValidatorMode] = "UseAttribute";
nhvc.Mappings.Add(new MappingConfiguration("NHibernate.ValidatorDemo.Model", null));

ValidatorEngine validator = new ValidatorEngine();
validator.Configure(nhvc);

Since NHibernate.Validator uses the event listener system, I think you have to use NHibernate 2.0 if you want the NHibernate integration. To validate an object you simply use the Validate function. Here is a simple example (using MonoRail):

public void Create([DataBind("user")] User user)
{
  InvalidValue[] errors = validator.Validate(user);

  if (errors.Length > 0)
  {
    Flash["errors"] = errors;
    RedirectToAction("index");
  }
  else
  {
    repository.Create(user);
  }
}

And here is the view:

<div>
    <h2>Create user</h2>    
    
    ${Html.FormToAttributed("Home", "Create", {@id: "create-form"})}
    
    <p>    
        <label for="UserName">UserName:</label>
        ${Form.TextField("user.UserName")}
        
        <label for="Email">Email:</label>
        ${Form.TextField("user.Email")}
        
        ${Html.SubmitButton("Submit")}
    </p>
            
    ${Html.EndForm()}
    
    
    <?brail if IsDefined("errors"): ?>
        <ul>
        <?brail for error in errors: ?>
            <li>${error.PropertyName}: ${error.Message}</li>    
        <?brail end ?>
        </ul> 
    <?brail end ?>
</div>

This is of course a very simple example. You probably want to show the error messages next to each field and add some client-side validation. I think I will do another post about trying to integrate NHibernate Validator with JQuery validation.

If you want to try NHibernate Validator and do not want to build it from the trunk, there is a 1.0 alpha release available on SourceForge. The release contains a WinForms example that shows how to integrate it with the WinForms error provider system.

Last week Fredrik Normén had a couple of nice posts on argument validation using C# extension methods. It got me thinking about a different approach, where a method interceptor could handle the validation.

First, a short recap on argument validation. I guess everyone has seen or written code with some kind of argument validation, like this for example:

public void Add(string value)
{
  if (string.IsNullOrEmpty(value)) 
      throw new ArgumentException("The value can't be null or empty", "value");
    
  ///...
}

To make argument validation easier I have (to a small extent) used a modified version of this design-by-contract utility class. It lets you write argument validation like this:

public void Add(string value)
{
  Check.Require(value != null, "value should not be null");
    
  ///...
  
  Check.Ensure(Count > 0);
}
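
If you don't want to pull in the full utility class, a minimal sketch of such a Check class could look like this (the linked design-by-contract class has more features and dedicated exception types; this is just the core idea):

public static class Check
{
  public static void Require(bool assertion, string message)
  {
    if (!assertion)
      throw new ArgumentException(message);  // precondition failed
  }

  public static void Require(bool assertion)
  {
    Require(assertion, "Precondition failed");
  }

  public static void Ensure(bool assertion)
  {
    if (!assertion)
      throw new InvalidOperationException("Postcondition failed");
  }
}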

So this got me thinking about how you could handle argument validation using method interception. With a method interceptor you could inspect the function arguments, checking for validation attributes. I envisioned code looking like this:

[Ensure(() => this.Count > 0)]
public void Register([NotNull] string name, [InRange(5,10)] int value)
{
  ///...
}

Getting the ensure postcondition to work like that is impossible, since you can only use constant expressions in attribute constructors. This is something I think they should look at for C# 4.0; being able to define lambdas in attribute arguments would make interesting scenarios possible (like this one). The NotNull and InRange attributes were very simple to implement:

public abstract class ArgumentValidationAttribute : Attribute
{        
  public abstract void Validate(object value, string argumentName);
}

[AttributeUsage(AttributeTargets.Parameter)]
public class NotNullAttribute : ArgumentValidationAttribute
{
  public override void Validate(object value, string argumentName)
  {
    if (value == null)
    {
        throw new ArgumentNullException(argumentName);
    }
  }    
}    

[AttributeUsage(AttributeTargets.Parameter)]
public class InRangeAttribute : ArgumentValidationAttribute
{
  private int min;
  private int max;

  public InRangeAttribute(int min, int max)
  {
    this.min = min;
    this.max = max;
  }

  public override void Validate(object value, string argumentName)
  {
    int intValue = (int)value;
    if (intValue < min || intValue > max)
    {
      throw new ArgumentOutOfRangeException(argumentName, string.Format("min={0}, max={1}", min, max));
    }
  }
}

This is of course the easy part. It is the actual interceptor that makes the magic happen. This implementation is just a proof of concept, and not optimal from a performance standpoint.

public class ValidationInterceptor : IInterceptor
{
  public void Intercept(IInvocation invocation)
  {
    ParameterInfo[] parameters = invocation.Method.GetParameters(); 
    for (int index = 0; index < parameters.Length; index++)
    {
      var paramInfo = parameters[index];
      var attributes = paramInfo.GetCustomAttributes(typeof(ArgumentValidationAttribute), false);

      if (attributes.Length == 0)
        continue;

      foreach (ArgumentValidationAttribute attr in attributes)
      {    
        attr.Validate(invocation.Arguments[index], paramInfo.Name);
      }            
    }

    invocation.Proceed();
  }        
}

To make this interceptor feasible for production you would probably have to add some smart caching mechanism. This interceptor-based argument validation is of course somewhat limited in applicability: you have to use a dynamic proxy generator to get the interceptor functionality, and only interface and virtual methods can be intercepted. But it is an interesting idea and shows another nice usage scenario for method interception.

It would be great if the ability to intercept method calls were built into a future version of the CLR; that way static and non-virtual methods could be intercepted too. It is a nice way to implement cross-cutting concerns (like validation). But until then we will have to live with proxy generators, which work nicely for interfaces:

public interface IRegistrationService
{
  void Register([NotNull] string name, [InRange(5, 10)] int value);

  void Register([NotNullOrEmpty] string name);
}
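
To actually get the interceptor to run you create the proxy through a proxy generator. Here is a minimal wiring sketch, assuming Castle DynamicProxy (which matches the IInterceptor/IInvocation types used above) and a hypothetical RegistrationService implementation:

ProxyGenerator generator = new ProxyGenerator();

IRegistrationService service = generator.CreateInterfaceProxyWithTarget<IRegistrationService>(
  new RegistrationService(),       // the real implementation
  new ValidationInterceptor());    // runs the argument validation before each call

// throws ArgumentNullException for "name" before RegistrationService.Register is ever invoked
service.Register(null, 7);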

If you specify the attributes on an interface you do not need to duplicate them on the implementation methods! A small update to the interceptor to support validation of the return value would allow something like this:

public interface IUserRepository
{
  [return: NotNull]
  User GetById([GreaterThanZero] int id);
  
  ///...
}
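
Here is a rough sketch of that small update, assuming the same Castle DynamicProxy interceptor as above: after Proceed(), the attributes on the return parameter are read via MethodInfo.ReturnParameter and applied to the return value.

public void Intercept(IInvocation invocation)
{
  // ... argument validation as shown earlier ...

  invocation.Proceed();

  // validate the return value against attributes declared with [return: ...]
  object[] returnAttributes = invocation.Method.ReturnParameter
      .GetCustomAttributes(typeof(ArgumentValidationAttribute), false);

  foreach (ArgumentValidationAttribute attr in returnAttributes)
  {
    attr.Validate(invocation.ReturnValue, "return value");
  }
}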

The syntax for attributes on return values is kind of funky. If Spec# isn't the future (I hope it is), then maybe something like this can be :)

The ideas and concepts that the Microsoft Research team has realized in Spec# are too great not to be included in a future version of C#!

Just look at this example:

public static int Subtract(int x, int y)
     requires x > y;
     ensures result > y;
{
     return x - y;
}

It is not only the runtime aspects of Spec# that are exciting; it is also the amazing level of static analysis they have implemented, which at compile time validates (across many method boundaries) that the requirements are upheld.

C# has really evolved a lot in the last two iterations (especially in 3.0), and I think this fast evolution has given it a good edge over other static languages (Java). But why stop now? Here are a few more things I would like to see in C# 4.0 :)

Extensible compilation pipeline:

The Boo .NET language has this concept of an extensible compilation pipeline that allows for really powerful usage scenarios. With it you can extend the language with new keyword constructs and attributes that actually manipulate the abstract syntax tree (AST). Here is an example where I have extended Boo with some constructs for design by contract:

[ensures(total > 0)]
def Add(value as int):
    requires value > 0
    total += value

Extending the Boo language like this is very trivial. For example the requires keyword is implemented like this:

macro requires:
  return [|
    if $(requires.Arguments[0]) == false:
        raise ArgumentException("requires precondition failed")
    |]

This is just a simple example; you can do more powerful stuff. A Boo macro is very different from a C++ macro: Boo macros operate on the compiler's AST and can query the object model and modify it in very powerful ways. The open compiler architecture makes Boo a language well suited for writing domain-specific languages. I doubt that something like this will be implemented in C#, though, since new compiler features like this are more likely to break existing implementations, something I think the C# team is not allowed to do.

Dynamic method invocation and duck typing:

If a class implements the IQuackFu interface in Boo, you can call methods on it that are resolved at runtime. This is similar to method_missing in Ruby. The IQuackFu interface looks like this:

public interface IQuackFu
{
  object QuackGet(string name, object[] parameters);
  object QuackSet(string name, object[] parameters, object value);
  object QuackInvoke(string name, params object[] args);
}

This interface can, for example, be used as a front for an XML document, allowing you to access elements as properties. I think the C# team is actually considering something like this, if you are to believe the rumours from the MVP Summit.
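
As a rough illustration of the XML front idea, here is a hypothetical C# class implementing IQuackFu (from Boo.Lang.dll) on top of an XElement; the class name and behaviour are my own sketch, not something shipped with Boo:

public class XmlQuackFu : IQuackFu
{
  private readonly XElement element;

  public XmlQuackFu(XElement element)
  {
    this.element = element;
  }

  public object QuackGet(string name, object[] parameters)
  {
    // resolve property access against a child element with the same name
    XElement child = element.Element(name);
    return child == null ? null : (object)new XmlQuackFu(child);
  }

  public object QuackSet(string name, object[] parameters, object value)
  {
    element.SetElementValue(name, value);
    return value;
  }

  public object QuackInvoke(string name, params object[] args)
  {
    throw new NotSupportedException("Only element access is supported in this sketch");
  }
}

From Boo you could then write something like doc.Customer.Name and have the lookup resolved at runtime.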

Anyone else have ideas or a wish list for C# 4.0?

I am constantly surprised by the power and elegance of JQuery and how using it can change the way you create web applications.

JQuery is not just a simple JavaScript library that makes Ajax and JavaScript code easier to write; it also allows you to separate your client-side logic from your HTML, making it much cleaner and easier to maintain.

Let's see how JQuery can improve your HTML with a concrete example. The requirements are:

  • An HTML table that lists tickets
  • Table rows should have alternating colors
  • Table rows should have a mouseover highlight effect
  • Clicking a row should show the ticket details

First I will show what a non-JQuery solution could look like. Here is a simple solution using a Web Forms repeater (chosen for good effect).

<asp:Repeater ID="ticketRepeater" runat="server">
  <HeaderTemplate>
    <table class="ticket-table" border="0">
      <tr>
        <th>Id</th>                        
        <th>Description</th>                        
      </tr>                
  </HeaderTemplate>
  
  <ItemTemplate>
    <tr class="even" onmouseover="highlight_over(this);" 
             onmouseout="highlight_out(this);"
             onclick="selectTicket(<%# Eval("Id") %>)">
      <td><%# Eval("Id") %></td>                    
      <td><%# Eval("Description") %></td>                    
    </tr>    
  </ItemTemplate>
  
  <AlternatingItemTemplate>
    <tr class="odd" onmouseover="highlight_over(this);" 
            onmouseout="highlight_out(this);"
            onclick="selectTicket(<%# Eval("Id") %>)">
      <td><%# Eval("Id") %></td>                    
      <td><%# Eval("Description") %></td>                    
    </tr>    
  </AlternatingItemTemplate>    
              
  <FooterTemplate>
    </table>
  </FooterTemplate>
</asp:Repeater>    

The worst part of the above code is the duplication of the entire item template, which is completely unnecessary in this case since the class name is the only difference. I have seen the repeater used like this in numerous examples and in real applications. The mouseover and mouseout handlers could be replaced by the ":hover" CSS selector, but that is not supported by IE (without resorting to JavaScript workarounds).

Here is the above markup but now taking advantage of JQuery:

<asp:Repeater ID="ticketRepeater2" runat="server">
  <HeaderTemplate>
    <table class="ticket-table" border="0">
      <thead>
        <tr>
          <th>Id</th>                        
          <th>Description</th>                        
        </tr>
      </thead>
      <tbody>
  </HeaderTemplate>
  
  <ItemTemplate>
    <tr>
      <td><%# Eval("Id") %></td>                    
      <td><%# Eval("Description") %></td>                    
    </tr>    
  </ItemTemplate>                
              
  <FooterTemplate>                    
      </tbody>
    </table>
  </FooterTemplate>
</asp:Repeater>    

The alternating template is removed, and so are the mouseover, mouseout and click event handlers. How do we restore this missing functionality? Here is the JQuery JavaScript code that restores it:

<script type="text/javascript">
        
        $(document).ready(function() {
            
            $(".ticket-table tbody tr:not([th]):odd").addClass("odd");
            $(".ticket-table tbody tr:not([th]):even").addClass("even");
            
            $(".ticket-table tbody tr").mouseover( function() {                    
                    $(this).addClass("hover");
            }).mouseout( function() {
                    $(this).removeClass("hover");
            });
            
            $(".ticket-table tbody tr").click(function() {
                var id = $("td:first", this)[0].innerHTML;
                // handle the selection of this ticket id                                
            });
            
        });
        
 </script>

JQuery's strength is built around its CSS-based element selection expressions. The $() function returns the list of elements that matched a JQuery "query". The returned list is a JQuery object with a long list of useful functions (like addClass, hide, show, etc). The cool thing is that when you call a function like addClass, the action is applied to every element that the JQuery expression selected. Goodbye to unnecessary loops! The mouseover/mouseout JQuery functions take the event handler as an argument, and here you can just use a JavaScript closure. Fetching an id from the content of a table cell is perhaps not a recommended approach; it would be better to store it in the row element's id.

The new solution introduces some extra lines of JavaScript code, but your HTML is much cleaner. The most important aspect is that you separate the behavior from the markup, which will (hopefully) make the solution more maintainable. The Web Forms repeater is a little verbose; it is surprising how much cleaner it can be if you just use inline scripting. Here is how it would look with MonoRail's Brail view engine:

<table class="ticket-table" border="0">
  <thead>
    <tr>
      <th>Id</th>                        
      <th>Description</th>                        
    </tr>
  </thead>
  <tbody>
  
  <?brail for ticket in tickets: ?>
      <tr>
        <td>${ticket.Id}</td>                    
        <td>${ticket.Description}</td>                    
      </tr>
  <?brail end ?>
  
  </tbody>
</table>

Another area where JQuery can be a big help is in Web Forms applications, where generated element ids often complicate JavaScript code. I found that JQuery's element selection can in many cases work around this problem (by finding elements based on class name, for example).

There is also a very large list of JQuery plugins/extensions; here are two that I can recommend:

  • Validation (very feature rich client side form validation)
  • Metadata (allows you to have json metadata embedded in the html class attribute)

Nate Kohari informed me that he has fixed the performance issue that I discovered in my last test, so I thought I would rerun the benchmark against the trunk version (revision 62) of Ninject.

I have also included another new container named Autofac, written by Nicholas Blumhardt and Rinat Abdullin.

This container has a nice list of features; here are a few:

  • Autowiring (without any intrusive attributes)
  • XML configuration support
  • Nice C# registration API (including the possibility to create components with expressions)
  • Module system (a nice way to structure parts of your application)
  • Nested containers

The registration API is at first glance like that of any other IoC container, except that it also provides the ability to override the autowiring by defining how components are created using lambda expressions. Here is an example:

// using autowiring
builder.Register<UserController>().FactoryScoped();
// using expressions
builder.Register(c => new UserController(c.Resolve<IUserRepository>(), c.Resolve<IAuthenticationService>()))
  .FactoryScoped();

The second method could potentially be a lot faster, since with it you create an anonymous method that creates the object directly. There is a lot more code to write, and if you change the constructor you need to update the registration code, so I am not sure you would want to use it for most components. To be fair to the other containers in the test, I benchmarked Autofac with both autowiring registration and expression-based registration. For more detail on how the benchmark works, please see the first benchmark post.

Another interesting feature of Autofac is its support for nested containers with predictable component cleanup.

var container = // ...
using (var context = container.CreateInnerContainer())
{
  var controller = context.Resolve<IController>();
  controller.Execute(); // use controller..
}

In the above example the controller, and all of its dependencies that implement IDisposable, will be disposed.

Here are the results:

[Chart: singleton component benchmark results (IoCSingleton_Autofac)]

[Chart: transient component benchmark results (IoCTransient_Autofac)]

It is nice to see that the Ninject problem has been solved. When using Autofac's expression (lambda) based registration API the results are, not surprisingly, a lot quicker than for the other containers. I would almost have thought that the performance difference would be bigger, seeing how fast the new operator was in my first benchmark (where I made a crude comparison against using the new operator directly to create all the instances).

My conclusion is the same as after the first test: the performance difference is not significant enough to warrant consideration when choosing an IoC container. Unless you create an incredibly large number of transient components (not recommended), in which case maybe use Autofac :)

It was fun to try out Autofac; it looks like an interesting contender and might even replace Castle Windsor as my favorite. I doubt it though: Castle Windsor probably has the largest user base, and I am pretty familiar with its codebase and extension points. But who knows, you shouldn't always stick to what you know :)

For the benchmark code: IoCBenchmark_ReRevisted.zip

If you want to automate some of your manual integration testing I can highly recommend WatiN. It is a great library for doing browser automation. The new version allows you to automate Internet Explorer and Firefox through a common interface.

example:

[Fact]
public void SearchForCodingInstinctOnGoogle()
{
  using (IBrowser ie = BrowserFactory.Create(BrowserType.InternetExplorer))
  {
    ie.GoTo("http://www.google.com");
    ie.TextField(Find.ByName("q")).Value = "Coding Instinct";
    ie.Button(Find.ByName("btnG")).Click();
    Assert.True(ie.ContainsText("Coding Instinct"));
  }
}

I am using xUnit.NET in the above example (Fact = Test). xUnit.NET has been mentioned on a lot of blogs lately, and I think for good reason. It comes bundled with a very nice set of extensions that are very useful, especially for integration testing.

ExcelData:

One of the extensions that come with xUnit is an attribute that lets you fetch test data from an Excel file; xUnit will then execute your test method once for each row in the Excel file. Here is an interesting usage scenario:

[Theory, ExcelData(@"Resources\PortalUrls.xls", "select * from PortalUrls")]
public void UrlShowsExpectedContent(string url, string contentText, bool requireLogin)
{
  InitBrowser(url);
  
  if (requireLogin)
  {
    LoginAction.WithAS();
  }

  Assert.True(Browser.Text.Contains(contentText), string.Format("\"{0}\" was not found", contentText));
}

My current customer is using a portal application to host a number of separate applications (that use a common SSO). With this simple test I was able to hit all the portal URLs and, with a very simple validation, check that the applications (and the login) were up and running.

The Excel file:

In the Excel file you must use the "Name a Range" function. You can find it by selecting the table, including the header, and right-clicking. It is also found under the "Formulas" tab (Define Name, Name Manager).

The test above is of course a very simple one; it only checks that the portal is configured correctly and that all applications are up and running, but it is a good start. Now I can go into each application/feature and write automated tests for each of the manual test cases.

UI test fragility:

When writing browser tests you need to do a lot of HTML element selection, so the tests become very fragile to UI changes. This is especially true for WebForms applications, where element names and ids constantly change with minor UI changes.

To combat this problem I tend to place most of the element selections in static helper classes. I also use a RegEx to identify elements. The reason for using a RegEx is to ignore a large part of the id prefix that the WebForms HTML renderer so gladly adds. Example:

public class LoginAction : ActionBase
{
  public static void WithAS()
  {
    Browser.TextField(new Regex(".*txtUserName")).Value = "xxxx";
    Browser.TextField(new Regex(".*txtPassword")).Value = "xxxx";
    Browser.Button(new Regex(".*btnLogin")).Click();
  }
}

There are many more useful xUnit extension attributes, for example InlineData, PropertyData and SqlServerData. For more information on WatiN and xUnit.NET, here are some links:

In most applications you usually store and cache some user information in a session object. Most actions depend on this information, which means that many services require it. This problem can be solved in different ways. What I usually end up doing is creating a user ticket class that holds a subset of the user information (or the complete user class) and other session-specific information. The simplest approach is then to pass this object along to each service/method that needs it:

public class WishListService
{
  private IWishListRepository wishListRepository;

  public WishListService(IWishListRepository wishListRepository)
  {
    this.wishListRepository = wishListRepository;
  }

  public void AddToWishList(IUserTicket ticket, Product product)
  {
    // handle all the important stuff
  }
}

The problem with this approach is that all callers need to fetch the user ticket. If you have deep method stacks and the deepest component needs the session information, then every method in the chain needs to pass the object along.

A sometimes nicer approach is to let the components fetch the session information on their own (through an interface):

public class WishListService
{
  private IWishListRepository wishListRepository;
  private ITicketAccessor ticketAccessor;

  public WishListService(IWishListRepository wishListRepository, ITicketAccessor ticketAccessor)
  {
    this.wishListRepository = wishListRepository;
    this.ticketAccessor = ticketAccessor;
  }

  public void AddToWishList(Product product)
  {
    IUserTicket ticket = ticketAccessor.Current();
    // handle all the important stuff
  }
}

This approach makes the component easier to use and the method calls simpler; the dependency on the session (ticket) information is handled by the ITicketAccessor, which can be injected by an IoC container.
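
A minimal ITicketAccessor could look like this; the exact implementation is up to you, and this sketch assumes the ticket is kept in the ASP.NET session under a hypothetical key:

public interface ITicketAccessor
{
  IUserTicket Current();
}

public class SessionTicketAccessor : ITicketAccessor
{
  private const string TicketKey = "UserTicket";

  public IUserTicket Current()
  {
    // assumes we are running in a web context with an active session
    return HttpContext.Current.Session[TicketKey] as IUserTicket;
  }
}

Register SessionTicketAccessor in the container and WishListService gets it injected; in unit tests you can swap in a fake that returns a canned ticket.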

Another approach is to have a static RuntimeContext class with static properties that give access to the session information from anywhere in your application:

public class RuntimeContext
{
  private const string KEY_TICKET = "RuntimeContext.Ticket";        

  /// <summary>
  /// Used to access the ticket for the current request
  /// </summary>
  public static IUserTicket Ticket
  {
    get
    {
      // try local execution context first
      IUserTicket ticket = Local.Data[KEY_TICKET] as IUserTicket;
      if (ticket != null)
        return ticket;

      // try session
      if (Local.RunningInWeb && HttpContext.Current.Session != null)
      {
        ticket = HttpContext.Current.Session[KEY_TICKET] as IUserTicket;
        // cache in local 
        if (ticket != null)
          Local.Data[KEY_TICKET] = ticket;
      }

      return ticket;
    }
    set
    {
      Local.Data[KEY_TICKET] = value;

      // store in session if running in web 
      if (Local.RunningInWeb)
      {
        HttpContext.Current.Session[KEY_TICKET] = value;
      }
    }
  }
}

The Local class in the above code is an abstraction over HttpContext.Current.Items and thread-local data, depending on whether the code is running in a web context or not. This last approach is very easy to work with, but you have less control over where and how the data is accessed.
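
A rough sketch of such a Local class could look like this (an assumption based on the description above, not the actual implementation):

public static class Local
{
  [ThreadStatic]
  private static Hashtable threadData;

  public static bool RunningInWeb
  {
    get { return HttpContext.Current != null; }
  }

  public static IDictionary Data
  {
    get
    {
      // in a web context use the per-request items collection
      if (RunningInWeb)
        return HttpContext.Current.Items;

      // otherwise fall back to thread-local storage
      if (threadData == null)
        threadData = new Hashtable();

      return threadData;
    }
  }
}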

There is also Thread.CurrentPrincipal, which is good to use, especially for user authorization. I am not sure which of the approaches I like most; the second one is probably the best from a purist/TDD point of view. I will post a question to the ALT.NET mailing list to find out what solutions smarter people than me have arrived at.