I am currently trying to prototype how a portal application could be built using ASP.NET MVC (I would prefer to use MonoRail but can't).

There is a big problem in the way the default implementation of IControllerFactory handles controller types. It uses a simple dictionary to map controller names to controller types. The key in the dictionary is only the controller name (like "Home" for a HomeController type), which means that you cannot have two controllers with the same type name, no matter what namespace or assembly they are in. In big applications you probably want to group your controllers into areas, in which case controllers with the same name are likely.

So currently it is not possible to group controllers into what MonoRail calls areas. To do this you would have to implement IControllerFactory from scratch, since the root of the problem is the ControllerTypeCache used by the DefaultControllerFactory, and the ControllerTypeCache is of course internal and sealed. There are probably other changes that would need to be made to introduce the concept of areas.
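For illustration, a controller type cache that supported areas might key its dictionary on both area and name. This is only a sketch; the class and method names below are invented and this is not the internal ControllerTypeCache:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical area-aware type cache, keyed on "area/name" instead of
// just the controller name. All names here are made up for illustration.
public class AreaControllerTypeCache
{
    private readonly Dictionary<string, Type> types =
        new Dictionary<string, Type>(StringComparer.OrdinalIgnoreCase);

    public void Register(string area, string controllerName, Type controllerType)
    {
        // e.g. Register("admin", "Users", typeof(MyApp.Areas.Admin.UsersController))
        types[area + "/" + controllerName] = controllerType;
    }

    public Type GetControllerType(string area, string controllerName)
    {
        Type type;
        types.TryGetValue(area + "/" + controllerName, out type);
        return type; // null if no controller is registered for that area/name
    }
}
```

A custom IControllerFactory could then pull the area from the route data and look up the type here instead of by name alone.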

There is an extensibility point named IViewLocator, used by the WebFormViewEngine, which can be exchanged to change the way the view engine locates views. A simple example:

public class PortalViewLocator : ViewLocator
{
    public PortalViewLocator()
    {
        ViewLocationFormats = new[] {
            "~/CustomViewDir/{1}/{0}.aspx",
            "~/CustomViewDir/{1}/{0}.ascx",
            "~/CustomViewDir/Shared/{0}.aspx",
            "~/CustomViewDir/Shared/{0}.ascx"
        };
        MasterLocationFormats = new[] {
            "~/CustomViewDir/{1}/{0}.master",
            "~/CustomViewDir/Shared/{0}.master"
        };
    }
}

To solve the problem with controller areas, MonoRail has something it calls a controller tree, which serves the same role as the ASP.NET MVC ControllerTypeCache. In MonoRail the area of a controller is specified using the optional ControllerDetails attribute:

using Castle.MonoRail.Framework;

[ControllerDetails(Area="admin")]
public class UsersController : Controller
{
    public void Index()
    {
    }
}

With the default routing the above Index action would correspond to the URL "/admin/users/index". If ASP.NET MVC adopted a similar approach it could check for the attribute in its clever Html helper functions which take lambda expressions (i.e. expression trees):
Html.ActionLink<UserController>(c => c.Register(), "Register now!")

I understand that they want to keep the ASP.NET MVC framework as lightweight and simple as possible, only adding features when they are really needed (which is a good philosophy), and to have extensibility points for specific scenarios. However, I think that a concept like controller areas should be built into the framework.

I also hope they start looking at scenarios where you might want to create a modular web application composed of multiple sub projects. I guess it would be possible to do that now, but it would be very hard and would require you to replace many of the default implementations in the MVC framework.

Microsoft seems to be reinventing the wheel again; Krzysztof blogged about a new framework being worked on with these goals:

MEF is a set of features referred in the academic community and in the industry as a Naming and Activation Service (returns an object given a “name”), Dependency Injection (DI) framework, and a Structural Type System (duck typing)

They are currently just experimenting with different approaches and a CTP is likely to be released soon. The example that Krzysztof showed might not be indicative of how it will eventually turn out but it doesn't look very promising (requiring property attributes to declare dependencies, etc).

The approach I was hoping Microsoft would take was to incorporate the concepts of an IoC container but make the concrete container pluggable, maybe even introducing this concept at the CLR level, or just in C#/VB. Maybe something like this:

this.authService = resolve IAuthentificationService

C# could introduce a new keyword (resolve) that would request a component from a container. The .NET framework would probably need a basic implementation of the container, but it should be completely pluggable so you can exchange it with, for example, Castle Windsor or StructureMap. I haven't really thought this through, but if Microsoft is going to include an IoC container in the core framework this is the approach I wish they would at least try (maybe they have?).
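Something close to this can already be approximated with a static facade over a pluggable container. The ServiceLocator and IContainerAdapter types below are my own illustration, not an existing API:

```csharp
// Hypothetical pluggable resolver facade; the names are invented for illustration.
public interface IContainerAdapter
{
    T Resolve<T>();
}

public static class ServiceLocator
{
    private static IContainerAdapter container;

    // The concrete container (Windsor, StructureMap, ...) is plugged in at startup.
    public static void SetContainer(IContainerAdapter adapter)
    {
        container = adapter;
    }

    public static T Resolve<T>()
    {
        return container.Resolve<T>();
    }
}

// Usage, roughly what the imagined keyword would compile down to:
// this.authService = ServiceLocator.Resolve<IAuthentificationService>();
```

A language keyword would mainly add compile-time sugar over exactly this kind of call.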

Yesterday I held a short seminar (2 hours) on open source and .NET for the systems development team at Cybercom Sweden East. The seminar was split into three separate presentations; I will try to give a short summary of them here. At the end of the post I will post links to the PowerPoints and application examples.

Open Source and .NET 

This presentation was a short introduction to open source. I began by explaining the most common licenses (GPL, LGPL, Apache) and then moved on to list a few interesting frameworks and development tools. But mostly it was about why I think utilizing open source is something that we as .NET developers need to be better at, and why this will be very important if the .NET platform is going to stay competitive.

I have never worked professionally as a Java developer, but my understanding is that tools like Eclipse and frameworks like Struts, Hibernate and Spring are used to a very large extent (30-70% according to some studies). And what platforms do Java applications run on? They usually run on an open source OS and use an open source web server (Apache) or application server. This means that the development environment, tools and frameworks are largely based on open source software.

I think this fact contributes to the mindset of Java developers and project managers to actively search for, and be more open to, open source solutions. I feel that many .NET developers are prone to constantly reinventing the wheel, for example by creating their own logging framework or a basic data layer tool without ever searching for existing alternatives, be they open source or commercial.

Open Source benefits

I also talked about some of the benefits I see in open source frameworks compared to Microsoft's offerings. One of the main things I mentioned here was innovation: I think a lot of the innovation in development methodologies (TDD), design, frameworks and tools comes to a large extent from the open source community. The fact that open source software starts without commercial aspirations or requirements results in software that is more focused on solving real problems and less on flashy designers.

Open Source problems

The major problem I see with open source is the lack of commercial support services. I don't see this as a problem personally, but it is a problem when trying to sell open source solutions to management. There are many companies offering training and support for open source frameworks and tools on the Java side, but very few for the .NET equivalents. I think this is also one of the major reasons why .NET is so far behind Java when it comes to open source adoption. The lack of commercial support is something I partly blame on Microsoft, who instead of promoting and supporting open source actively compete with it.

Open Source trends

I ended the presentation by showing some statistics from the IOUG Open Source 07 Report. This report really shows how much open source software is currently being used by big enterprise companies (almost exclusively non-.NET software) and that this is an increasing trend. Just between 2006 and 2007 the use of open source software increased by 26%. The report also showed the reasons why open source software is being used; the two main ones were cost savings and no proprietary vendor lock-in.

Not to be misunderstood: I am not an open source fundamentalist. I just think there is a lot of value in some open source frameworks and tools, and I think .NET developers in general (maybe not those who happen to read this blog) should utilize this resource more.

Log4Net & NHibernate

I also held a presentation about Log4Net and an hour-long presentation about object-relational mapping with NHibernate, which included a short live coding demo.

Here are the PowerPoints (they are in Swedish):

I also made some sample applications:

I wanted some code in this post, so here is a snippet from the NHibernateDemo application that shows how simple property projection is with the NHibernate Criteria API:

public IList<string> GetAllBlogOwnerNames()
{
  return Session.CreateCriteria(typeof(User))
    .Add(Expression.IsNotEmpty("Blogs"))
    .SetProjection(Projections.Property("UserName"))
    .List<string>();
}

Someone (Joshua) mentioned Ninject in the comments to my IoC benchmark. I hadn't heard of this project before, but after checking out ninject.org and seeing the project slogan "lightning-fast dependency injection" I felt that it deserved to be included in the test.

I am not sure what the slogan refers to though: lightning-fast usage and setup, or runtime performance? Anyway, it has a nice fluent registration API:

Bind<IUserRepository>()
  .To<LdapUserRepository>()
  .Using<SingletonBehavior>();
  
Bind<IAuthentificationService>()
  .To<DefaultAuthentificationService>()
  .Using<SingletonBehavior>();

Bind<UserController>()
  .ToSelf()
  .Using<SingletonBehavior>(); 

The tests were made with the release build of RC1 downloaded from the project homepage. I was kind of surprised by the results:

IoCSingleton_WithNinject
IoCTransient_WithNinject

I might be doing something wrong here, but this is the result I got. For the transient case there seems to be a big memory leak; the Ninject kernel seems to keep references to transient objects. I tried the kernel's Release method but the memory leak was still evident.

Just to check that it was not something wrong with my code I did a profiling trace with JetBrains dotTrace:

NinjectTrace

It could still be me setting something up in the wrong way, but it looks like it's a bug in the Ninject Core.

Just a note: I think that premature optimization can result in bad design and architecture. The reason I did this and the previous test was not to find the fastest container; it was firstly to get a chance to play with containers that were new to me, and secondly to see if there were any significant performance differences between them worth taking into account when choosing a container.

My conclusion in the previous test was that the difference in performance was not big enough to be relevant compared to other aspects of the containers, like how much you like the API or the features. The Ninject result for the transient test above is very significant; however, it is probably caused by a memory bug in the release candidate and will likely be fixed in the next release. If you disregard this bug I actually kind of liked Ninject, it had a nice API and way of doing things, but its slogan is currently a little misleading :)

For the complete code: IoCBenchmark_Revisited.zip

Updated (2008-05-05):

Nate Kohari has fixed the performance issue; the results are now in line with the other containers. For the new results (which also include Autofac) please view this post.

Here is an interesting problem: try writing the Linq query below using lambda expressions.

var usersWithBigOrders =
  from usr in context.Users
  from ordr in usr.Orders
  where ordr.Total > 10
  select usr;

I ran into this interesting Linq question when writing yesterday's post. To figure out what the above query actually does I used Reflector. The code below is what the C# compiler generates (when reverted back from IL to C#):

context.Users.SelectMany(
Expression.Lambda<Func<User, IEnumerable<Order>>>(
  Expression.Property(CS$0$0000 = Expression.Parameter(typeof(User), "user"),
   (MethodInfo) methodof(User.get_Orders)), new ParameterExpression[] { CS$0$0000 }),
    Expression.Lambda(
      Expression.New((ConstructorInfo) methodof(<>f__AnonymousType0<User, Order>..ctor,
       <>f__AnonymousType0<User, Order>), 
       new Expression[] { CS$0$0000 = Expression.Parameter(typeof(User), "user"), 
       CS$0$0002 = Expression.Parameter(typeof(Order), "order") }, 
       new MethodInfo[] { (MethodInfo) methodof(<>f__AnonymousType0<User, Order>.get_user, 
       <>f__AnonymousType0<User, Order>), 
       (MethodInfo) methodof(<>f__AnonymousType0<User, Order>.get_order, 
       <>f__AnonymousType0<User, Order>) }), 
       new ParameterExpression[] { CS$0$0000, CS$0$0002 }))
.Where(Expression.Lambda(Expression.GreaterThan(
  Expression.Property(Expression.Property(
    CS$0$0000 = Expression.Parameter(typeof(<>f__AnonymousType0<User, Order>),
     "<>h__TransparentIdentifier0"),
      (MethodInfo) methodof(<>f__AnonymousType0<User, Order>.get_order,
       <>f__AnonymousType0<User, Order>)), (MethodInfo) methodof(Order.get_Total)), 
       Expression.Convert(Expression.Constant(10, typeof(int)), typeof(int?))), 
       new ParameterExpression[] { CS$0$0000 }))
        .Select(Expression.Lambda(Expression.Property(
          CS$0$0000 = Expression.Parameter(typeof(<>f__AnonymousType0<User, Order>),
           "<>h__TransparentIdentifier0"), 
           (MethodInfo) methodof(<>f__AnonymousType0<User, Order>.get_user,
            <>f__AnonymousType0<User, Order>)), 
            new ParameterExpression[] { CS$0$0000 })).ToList<User>();

It is quite surprising how much code the compiler actually generates. The reason for the amount of code above is that the LinqToSql table class implements IQueryable. When the compiler detects this interface, instead of generating IL that performs the query it generates IL that builds up an expression tree describing the query.
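The difference is easy to see in isolation: the declared type decides whether the same lambda compiles to a delegate or to code that builds an expression tree. A minimal self-contained illustration:

```csharp
using System;
using System.Linq.Expressions;

class Program
{
    static void Main()
    {
        // Compiled to IL: an anonymous method you can invoke directly.
        Func<int, bool> compiled = x => x > 10;

        // Compiled to code that builds an expression tree describing "x > 10".
        Expression<Func<int, bool>> tree = x => x > 10;

        Console.WriteLine(compiled(42));        // True
        Console.WriteLine(tree.Body.NodeType);  // GreaterThan
    }
}
```

LinqToSql later walks such a tree to produce SQL, which is why it needs the tree form rather than compiled IL.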

So in order to understand what the above code actually describes, I wrote the same Linq query but querying a normal .NET collection instead of LinqToSql. The code that Reflector generates now looks like this:

List<User> users = new List<User>();
if (CS$<>9__CachedAnonymousMethodDelegate6 == null)
{
    CS$<>9__CachedAnonymousMethodDelegate6 = delegate (User usr) {
        return usr.Orders;
    };
}
if (CS$<>9__CachedAnonymousMethodDelegate7 == null)
{
    CS$<>9__CachedAnonymousMethodDelegate7 = delegate (User usr, Order ordr) {
        return new { usr = usr, ordr = ordr };
    };
}
if (CS$<>9__CachedAnonymousMethodDelegate8 == null)
{
    CS$<>9__CachedAnonymousMethodDelegate8 = delegate (<>f__AnonymousType0<User, Order> <>h__TransparentIdentifier0) {
        return <>h__TransparentIdentifier0.ordr.Total > 10;
    };
}
if (CS$<>9__CachedAnonymousMethodDelegate9 == null)
{
    CS$<>9__CachedAnonymousMethodDelegate9 = delegate (<>f__AnonymousType0<User, Order> <>h__TransparentIdentifier0) {
        return <>h__TransparentIdentifier0.usr;
    };
}
IEnumerable<User> usersWithBigOrders = users
  .SelectMany(CS$<>9__CachedAnonymousMethodDelegate6, CS$<>9__CachedAnonymousMethodDelegate7)
  .Where(CS$<>9__CachedAnonymousMethodDelegate8)
  .Select(CS$<>9__CachedAnonymousMethodDelegate9);

This is a lot more understandable, but it still took a while to figure out exactly what the above code was doing. Here are both queries, one using Linq syntax and the other using the lambda syntax; they are equivalent.

var usersWithBigOrders =
  from usr in context.Users
  from ordr in usr.Orders
  where ordr.Total > 10
  select usr;

var usersWithBigOrders = context.Users
  .SelectMany(user => user.Orders,(user, order) => new {User = user, Order = order})
  .Where(anonType => anonType.Order.Total > 10)
  .Select(anonType => anonType.User);

One of the biggest strengths of O/R mappers is that many have an object-oriented "query by criteria" API. I will show what I mean by that in a bit, but to explain why I think a criteria API is such a great thing I will show the alternatives first.

Let's take a common search form scenario where a user can specify a number of customer filtering options: date added, name, whether the customer should have an address, whether the customer should be linked to a specific salesman, etc. If you were forced to use a stored procedure, how would you implement this query?

I have seen a couple of solutions to problems like this; here is one that is really bad:

IF @name = ''
BEGIN 
     --- the whole query for this case 
END    
ELSE IF @name <> '' AND @shouldHaveAddress IS NOT NULL 
BEGIN 
     --- the whole query for this case 
END 
ELSE IF @name <> '' AND @shouldHaveAddress IS NOT NULL AND @salesmanNr <> '' 
BEGIN
     --- the whole query for this case 
END

Here the whole query is duplicated for each possible parameter combination. I recently saw a stored procedure like this that duplicated a very long query about 10 times (with very small variations in the where and join clauses).

Another solution is to complicate the query with embedded IFs, CASEs and additional OR clauses. But this not only makes the query unintelligible, I also think it has some limitations. The solution that many arrive at is to build the query using string concatenation:

SET @sqlSelect = 'SELECT distinct ' + @NEWLINE + ' ' + @returnColumns + @NEWLINE
SET @sqlFrom = ' FROM Customers cust' + @NEWLINE
IF @companies is NOT NULL
BEGIN
  SET @sqlFROM = @sqlFROM + ' JOIN Companies comp on comp.Id=cust.CompanyId' + @NEWLINE
  SET @sqlFROM = @sqlFROM + ' LEFT JOIN Addresses addr on addr.Id=cust.AddressId ' + @NEWLINE  
END
IF @salesmanId IS NOT NULL
BEGIN
  SET @sqlFROM = @sqlFROM + ' JOIN Salesmen sal on sal.Id=' + @salesmanId 
END
...
..
.

I actually think that building up the query like this is not all that bad; however, I would do it in C# and skip the stored procedure. The reason I prefer that approach is that there is less duplication and it is somewhat easier to maintain. But it is far from what we want!
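As a sketch, the same conditional query building in C# might look like this (the SearchOptions members and the table/column names are made up; note that values should still be passed as parameters, never concatenated into the SQL):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Text;

// Hypothetical dynamic query builder; SearchOptions and the schema are invented.
StringBuilder sql = new StringBuilder("SELECT DISTINCT cust.* FROM Customers cust");
List<SqlParameter> parameters = new List<SqlParameter>();

if (options.SalesmanId != null)
{
    sql.Append(" JOIN Salesmen sal ON sal.Id = cust.SalesmanId");
}

sql.Append(" WHERE 1 = 1");

if (options.Name != null)
{
    // Parameterize the value to avoid SQL injection.
    sql.Append(" AND cust.Name LIKE @name");
    parameters.Add(new SqlParameter("@name", options.Name + "%"));
}
```

The concatenation only ever assembles trusted SQL fragments; the user-supplied values all travel as parameters.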

So how can we handle this with the NHibernate criteria API?

ICriteria query = session.CreateCriteria(typeof(Employee));

if (searchOptions.FirstName != null)
{
  query.Add(Expression.Eq("FirstName", searchOptions.FirstName));
}

if (searchOptions.LastName != null)
{
  query.Add(Expression.Eq("LastName", searchOptions.LastName));
}

if (searchOptions.PhoneNumber != null)
{
  query.CreateCriteria("PhoneNumbers")
    .Add(Expression.Like("Number", searchOptions.PhoneNumber + "%"));
}
return query.List<Employee>();

The NHibernate criteria API might be verbose for simple scenarios compared to a SQL query in a string, but it is in scenarios like the above that it really shines :) But wait, it gets better. If you use Ayende's great NHibernate Query Generator (NHQ) you can do this:

QueryBuilder<Employee> query = new QueryBuilder<Employee>();

if (options.FirstName != null)
{
  query &= Where.Employee.FirstName == options.FirstName;
}

if (options.LastName != null)
{
  query &= Where.Employee.LastName == options.LastName;
}

if (options.PhoneNumber != null)
{
  query &= Where.Employee.PhoneNumbers.Number.Like(options.PhoneNumber, MatchMode.Start);
}            

return Repository<Employee>.FindAll(query, OrderBy.Employee.LastName.Desc);

NHQ is a very clever code-gen utility that you set up as a post-build step. It generates the Where/QueryBuilder classes from the NHibernate mapping files. The result is that you do not need any strings in your queries! More commonly you use it like this:

return Repository<User>.FindAll(Where.User.Name == name);

I think it comes very close to using LINQ. So to end this post I guess I need to show how the above query would look with LINQ, because LINQ to SQL handles the above scenario pretty well.

IQueryable<Employee> query = linqContext.Employees;

if (options.FirstName != null)
{
  query = query.Where(emp => emp.FirstName == options.FirstName);
}

if (options.LastName != null)
{
  query = query.Where(emp => emp.LastName == options.LastName);
}

if (options.PhoneNumber != null)
{
  query = from emp in query
	  from phoneNr in emp.PhoneNumbers
	  where phoneNr.Number.StartsWith(options.PhoneNumber)
	  select emp;
}

return query.ToList();

The above code is nice, but I think querying across relations is handled more nicely in the NHibernate criteria API, especially when using NHQ. Well, that is all; now you know why I think a criteria API is one of the major reasons to use an O/R mapper.

I don't use NHibernate for everything, so when I do use the ADO.NET API directly I like to use a small static utility class that lets me do this:

Db.Transaction(delegate(SqlCommand cmd)
{
    cmd.CommandText = "DELETE FROM LogEntries WHERE DateCreated < @DateCreated";
    cmd.Parameters.AddWithValue("@DateCreated", DateTime.Now.Subtract(TimeSpan.FromDays(30)));
    cmd.ExecuteNonQuery();
});

This simple static method takes care of so much of the repetitive code that you never want to write twice, like setting up the connection, starting the transaction, and handling the try/catch and commit/rollback. Here is the code for the Transaction method:

public static void Transaction(SqlCommandHandler handler)
{
    using (SqlConnection connection = new SqlConnection(Settings.CommonDb))
    {
        connection.Open();

        SqlTransaction tx = connection.BeginTransaction(IsolationLevel.ReadCommitted);
        try
        {
            using (SqlCommand cmd = connection.CreateCommand())
            {
                cmd.Transaction = tx;
                handler(cmd);
            }
            tx.Commit();
        }
        catch
        {
            tx.Rollback();
            throw;
        }
    }
}
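For completeness, the Transaction method above assumes a delegate type along these lines (it is not shown in the snippet):

```csharp
using System.Data.SqlClient;

// The callback type used by Db.Transaction; it receives a command that is
// already enlisted in the open transaction.
public delegate void SqlCommandHandler(SqlCommand cmd);
```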

The C# language became a lot more powerful in the 2.0 update when anonymous methods (i.e. closures) were introduced, and the 3.0 update that introduced the lambda syntax made it even better. If it weren't for these new features of C# I would probably be compelled to move to Ruby!

There are a number of inversion of control containers out there, so I thought it would be an interesting experiment to do a simple benchmark. There are different ways to instantiate a type in .NET, for example via the new operator, Activator, GetUninitializedObject and DynamicMethod. The performance difference between these methods is in some cases quite high; maybe the same is true for these IoC containers? Granted, IoC containers do more than just create objects, so other factors will probably play a big role in the results.

So here are the contestants:

I have been using Castle Windsor since 2005 and I think it is the best of the bunch, so I guess I am unconsciously biased toward Windsor. However, I will try to make this benchmark as objective as I can.

The scenario for this test:

  • Have each IoC container resolve a UserController 1,000,000 times
  • The UserController will have two constructor dependencies
  • Run the test with transient (new instance for each resolve) and singleton components

The UserController looks like this:

public class UserController
{
    private IUserRepository repository;
    private IAuthentificationService authService;

    public UserController(IUserRepository repository, IAuthentificationService authService)
    {
        this.repository = repository;
        this.authService = authService;
    }
}

I also have a general container interface that the benchmark engine will use. Each container will implement this interface.

public interface IContainer
{
    string Name { get; }

    T Resolve<T>();
    
    void SetupForTransientTest();
    void SetupForSingletonTest();
}
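As an example, an adapter for one of the containers might look roughly like this. I am showing Windsor since that is the container I know best; the registration method names have varied between Windsor releases, so treat the details as a sketch rather than the benchmark's actual code:

```csharp
using Castle.Core;
using Castle.Windsor;

// Sketch of a Windsor adapter for the benchmark's IContainer interface.
public class WindsorAdapter : IContainer
{
    private IWindsorContainer container;

    public string Name
    {
        get { return "Castle Windsor"; }
    }

    public T Resolve<T>()
    {
        return container.Resolve<T>();
    }

    public void SetupForSingletonTest()
    {
        container = new WindsorContainer();
        // AddComponent uses Windsor's default lifestyle, which is singleton.
        container.AddComponent("repository", typeof(IUserRepository), typeof(LdapUserRepository));
        container.AddComponent("authService", typeof(IAuthentificationService), typeof(DefaultAuthentificationService));
        container.AddComponent("controller", typeof(UserController));
    }

    public void SetupForTransientTest()
    {
        container = new WindsorContainer();
        // Registration with an explicit transient lifestyle; adjust the
        // method names for your Windsor version.
        container.AddComponentWithLifestyle("repository", typeof(IUserRepository), typeof(LdapUserRepository), LifestyleType.Transient);
        container.AddComponentWithLifestyle("authService", typeof(IAuthentificationService), typeof(DefaultAuthentificationService), LifestyleType.Transient);
        container.AddComponentWithLifestyle("controller", typeof(UserController), LifestyleType.Transient);
    }
}
```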

All tests used the latest released version of each library. Before you interpret these charts please observe that the measurement is for one million component resolves which means the actual time difference between each container is actually very small.

Here are the results when all components were setup as singletons:

IoCSingleton

Here are the results when all components were setup as transient:

IoCTransient

So what do these charts tell us? Let's take the biggest difference in the transient case: Spring.NET took 44.149 seconds and Unity took 8.164 seconds. What is the actual difference when resolving a single instance?

  Spring.NET : 44.149 / 1000000 = 0.000044149 seconds
  Unity      :  8.164 / 1000000 = 0.000008164 seconds

So the actual difference is only about 36 microseconds. Another way to put these values into perspective is to compare against the new operator. I created a NewOperatorContainer with a Resolve method that looks like this:

public T Resolve<T>()
{
    object o = new UserController(new LdapUserRepository(), new DefaultAuthentificationService());
    return (T) o;
}

OK, comparing the above with an inversion of control container is like comparing apples to oranges; an IoC container handles so much more than just object creation. Also, an IoC container cannot use the new operator directly but must use one of the other methods. My guess is that all the IoC containers in this test use an approach involving IL generation, which if cached comes close to using the new operator directly. Anyway, I think it shows just how small the difference between the real IoC containers is. In order to visualize this I needed to invert the values so that high means fast and low means slow.

IoCInversed

Update: The above chart can be very misleading. The x-axis is not seconds but 1/s. I hope it shows that the difference between the containers is very small compared to instantiating the objects manually.

OK, can we draw any conclusions from the test? Well, I think we can say that performance should not be an issue when choosing one of these IoC containers; the difference is too small. When you choose which container to use you should consider other aspects, like how invasive the container is to the way you want to work.

For the complete code: IoCBenchmark.zip

The new statistics class in NHibernate 2.0 is a great tool for monitoring potential performance problems in your app. The class exposes a large number of counters and measurements. Here are some highlights of what is available:

  • EntityDeleteCount
  • EntityInsertCount
  • EntityLoadCount
  • EntityUpdateCount
  • QueryExecutionCount
  • QueryExecutionMaxTime
  • QueryExecutionMaxTimeQueryString
  • QueryCacheHitCount
  • ConnectCount
  • SessionCloseCount
  • SessionOpenCount
  • CollectionLoadCount
  • CollectionUpdateCount
  • CollectionRemoveCount
  • Queries
  • PrepareStatementCount

The statistics engine can be turned on/off dynamically with the IsStatisticsEnabled property, and you can get statistics on specific entities as well.
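Toggling the statistics around a unit of work might look like this (the session factory and the User entity are assumed to come from your own application):

```csharp
using System;
using NHibernate;

// Assumes an existing ISessionFactory; the entity and query are placeholders.
sessionFactory.Statistics.IsStatisticsEnabled = true;
sessionFactory.Statistics.Clear();

using (ISession session = sessionFactory.OpenSession())
{
    session.Get<User>(1);
}

Console.WriteLine("Entities loaded: " + sessionFactory.Statistics.EntityLoadCount);
Console.WriteLine("Prepared statements: " + sessionFactory.Statistics.PrepareStatementCount);
```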

Just to try this out I created a MonoRail filter that checks for "nhibstats" in the query parameters of the request. If the parameter is found it will turn on the NHibernate statistics; this is done before the controller action is executed. After the action has completed it will add the stats data to the PropertyBag so the view can access it.

public class NHibernateStatsFilter : IFilter
{
    public bool Perform(ExecuteWhen exec, IEngineContext context, IController controller,
                        IControllerContext controllerContext)
    {
        if (context.Request["nhibstats"] == null)
            return true; // let the request continue untouched

        if (exec == ExecuteWhen.BeforeAction)
        {
            sessionFactory.Statistics.Clear();
            sessionFactory.Statistics.IsStatisticsEnabled = true;
        }
        else if (exec == ExecuteWhen.AfterAction)
        {
            controllerContext.PropertyBag["nhibernate_stats"] = sessionFactory.Statistics;
            sessionFactory.Statistics.IsStatisticsEnabled = false;
        }

        return true; // returning true tells MonoRail to continue processing
    }
}

I use the same filter for both the before and after action "events", which means you have to specify the filter attribute like this:

[Filter(ExecuteWhen.BeforeAction | ExecuteWhen.AfterAction, typeof(NHibernateStatsFilter))]
public class BaseController : Controller
{
        
}

Now you can add something like this to your top layout view:

<?brail if IsDefined("nhibernate_stats"): ?>
    <dl>
        <dt>EntityInsertCount</dt>
        <dd>${nhibernate_stats.EntityInsertCount}</dd>        
        <dt>EntityUpdateCount</dt>
        <dd>${nhibernate_stats.EntityUpdateCount}</dd>        
        .
        ..
        ...
    </dl>
<?brail end ?>

I think this could be really useful. Now you don't have to open SQL Profiler whenever you want to know what NHibernate is up to!

It took some time, but Slick Code Search is now finally published on CodePlex. There is a binary release available on CodePlex and the source is available via Google's Subversion hosting: http://slickcodesearch.googlecode.com/svn/trunk/

So what is this thing? Well, it is a small WPF application that lets you index and search through your source files (C# only for now); in short, a Google Desktop for your local code.

slickcode3

When you start it up you get a floating textbox where you can type in a Lucene search query. You can search for both type and method names. Currently you need to press the Enter key to execute the search and update the result list.

Example Lucene queries:

  • t:ISessi*   Searching for a type beginning with "ISessi"
  • m:Get*     Searching for all types that have a method beginning with "Get"

For a complete description of the Lucene syntax: http://lucene.apache.org/java/docs/queryparsersyntax.html

The result list looks like this:

slickcode4

You can navigate the result list by using the up/down keys. To expand/collapse an item just press the left/right keys. Once an item is expanded the type's methods will be listed and you can now navigate those with the up/down keys. If you press right when a method is highlighted a code window will appear with that method in focus.

You can also press the Enter key while a result item is highlighted; this will open that file in a program you can specify in the options dialog.

slickcode5

The above screenshot shows how an item looks while it is expanded. If anyone has any good ideas for new features, please add them to the issue tracker on CodePlex. I think this app needs something more to be really useful.

The code for this app is a little strange. It started as a "learn WPF" demo app that grew into something more. Then I tried a model-view-presenter pattern with Castle Windsor integration just for fun, but this refactoring is more of an afterthought and it shows in the code.

I actually started this app a long time ago after seeing a screencast by Ayende where he developed a code search engine. Some of the basics of what he did in that cast can be found in Slick Code Search (a SharpDevelop library that handles the C# file parsing and Lucene.NET for the text searching).

NHibernate 2.0 Alpha1 was released a few days ago and the 2.0 version brings a lot of new features. There is a completely new event/listener system that can be used for all sorts of cross-cutting concerns. A basic example would be to register a save/update listener that updates some common fields (like ModifiedBy, ModifiedDate).

In order to use the event system you need to create a class that either inherits from a default NHibernate event listener or implements one of the many event interfaces. Here is an example:

using NHibernate.Event;
using NHibernate.Event.Default;

public class CustomSaveEventListener : DefaultSaveEventListener
{
    protected override object PerformSaveOrUpdate(SaveOrUpdateEvent evt)
    {
        IEntity entity = evt.Entity as IEntity;
        if (entity != null)
            ProcessEntityBeforeInsert(entity);

        return base.PerformSaveOrUpdate(evt);
    } 

    internal virtual void ProcessEntityBeforeInsert(IEntity entity)
    {
        User user = (User) Thread.CurrentPrincipal;
        entity.CreatedBy = user.UserName;
        entity.ModifiedBy = user.UserName;
        entity.CreatedDate = DateTime.Now;
        entity.ModifiedDate = DateTime.Now;
    }
}

The reason I inherit from the default implementation is that when you specify a listener in the configuration you replace the default listener. I am not sure what the recommended way to do this is, but the way I understand it you have two choices: either you inherit from a default implementation and only specify your listener in the configuration, or you just implement the event interface (ISaveOrUpdateEventListener for example), but then you also need to specify the default implementation in the configuration (in order not to lose functionality).

Here is what the configuration looks like:
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
    <session-factory>
        ...
        <listener class="NHibTest.Listeners.CustomSaveEventListener, NHibTest" type="save" />
        <listener class="NHibTest.Listeners.CustomSaveEventListener, NHibTest" type="save-update" />                    
    </session-factory>
</hibernate-configuration>

You can also place the listeners within an event XML element if you want to set up many listeners of the same type. It is also possible to do this via code, like this:

Configuration cfg = new Configuration();
cfg.EventListeners.SaveEventListeners = 
    new ISaveOrUpdateEventListener[] {new CustomSaveEventListener() };
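
The first option mentioned above (grouping several listeners under an event element) would look something like this in the configuration; the second listener class name is made up just to illustrate multiple listeners of the same type:

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
    <session-factory>
        ...
        <event type="save-update">
            <listener class="NHibTest.Listeners.CustomSaveEventListener, NHibTest" />
            <listener class="NHibTest.Listeners.AuditLogListener, NHibTest" />
        </event>
    </session-factory>
</hibernate-configuration>
```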

There are many other event types, for example Load, PreUpdate, DirtyCheck, AutoFlush and PostDelete. These events could be used in interesting scenarios, for example validation and security.

For a previous project I extended the IoC container Castle Windsor to support dynamic component selection. In this case it was extended to select the correct implementation based on the current user type. I even wrote a CodeProject article about it, but for some reason I never published it. Well, this morning I finally submitted it, and it can be found here. I will try to outline the changes I made in this post as well.

Why do you need dynamic component selection? Well, imagine you are going to develop a web application that is used by B2C (Consumer), B2P (Partner) and B2E (Employee) users. The B2C users have their profiles and passwords in a local database, the B2P users' profiles can only be accessed via an external third-party web service, and the B2E users are in an LDAP server.

In this kind of scenario you would define an interface for a user repository and then an implementation for each user type. If this web application serves all these user types from a single instance, then you need a factory to return the correct implementation based on the current user type. If you have many different scenarios where the implementation differs based on some common runtime condition, then it would be nice to declare that (for example in an XML configuration file) and have the inversion of control container dynamically select the correct component. That way you would not need to create a bunch of almost identical factories for each scenario.

So what we want to do is basically state in the Windsor configuration which implementation corresponds to which user type.

<components>
    <component id="b2e.user.repository"
               service="ExtendingWindsor.IUserRepository, ExtendingWindsor"
               type="ExtendingWindsor.EmployeeUserRepository, ExtendingWindsor"
               channel="B2E" />

    <component id="b2c.user.repository"
               service="ExtendingWindsor.IUserRepository, ExtendingWindsor"
               type="ExtendingWindsor.ConsumerUserRepository, ExtendingWindsor"
               channel="B2C" />

    <component id="b2p.user.repository"
               service="ExtendingWindsor.IUserRepository, ExtendingWindsor"
               type="ExtendingWindsor.PartnerUserRepository, ExtendingWindsor"
               channel="B2P" />
</components>

The channel attribute in the above XML is not something that exists in the standard Castle Windsor configuration schema. However, Castle Windsor allows additional attributes, so the only thing we need to do now is extend the container so that when it searches for an implementation of the IUserRepository interface it picks the correct one.
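
For reference, the types mentioned in the configuration might look something like this. The type names come from the XML above, but the members are assumptions made for illustration:

```csharp
using System.Security.Principal;

// Sketches of the repository types referenced in the configuration.
public interface IUserRepository
{
    string GetFullName(string userName);
}

public class EmployeeUserRepository : IUserRepository
{
    // Would query the LDAP server in a real implementation.
    public string GetFullName(string userName) { return "employee:" + userName; }
}

public class ConsumerUserRepository : IUserRepository
{
    // Would query the local database.
    public string GetFullName(string userName) { return "consumer:" + userName; }
}

public class PartnerUserRepository : IUserRepository
{
    // Would call the external third-party web service.
    public string GetFullName(string userName) { return "partner:" + userName; }
}

// Principal carrying the channel (B2E/B2C/B2P) that the extended
// naming subsystem reads from Thread.CurrentPrincipal.
public class UserPrincipal : GenericPrincipal
{
    public UserPrincipal(IIdentity identity, string[] roles, string channel)
        : base(identity, roles)
    {
        Channel = channel;
    }

    public string Channel { get; private set; }
}
```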

One way to achieve this is to exchange the default naming subsystem for one that inherits from DefaultNamingSubSystem and changes the behavior of the GetHandler method.

public class ExtendedNamingSubSystem : DefaultNamingSubSystem
{
    public override IHandler GetHandler(Type service)
    {
        IHandler[] handlers = base.GetHandlers(service);

        // Only one registration for this service, nothing to select between
        if (handlers.Length < 2)
            return base.GetHandler(service);

        UserPrincipal user = (UserPrincipal) Thread.CurrentPrincipal;

        // Pick the handler whose channel attribute matches the current user
        foreach (IHandler handler in handlers)
        {
            string channel = handler.ComponentModel.Configuration.Attributes["channel"];
            if (channel == user.Channel)
            {
                return handler;
            }
        }

        // No channel matched, fall back to the default behavior
        return base.GetHandler(service);
    }
}

What we need to do now is replace the default NamingSubSystem with our own. Exchanging the NamingSubSystem of the underlying kernel in Castle Windsor is a little more problematic than it should be. We need to use the constructor that takes an implementation of IKernel.

This is what we have to do:

IKernel kernel = new DefaultKernel();
kernel.AddSubSystem(SubSystemConstants.NamingKey, new ExtendedNamingSubSystem());
XmlInterpreter interpreter = new XmlInterpreter();
DefaultComponentInstaller installer = new DefaultComponentInstaller();

interpreter.Kernel = kernel;
interpreter.ProcessResource(interpreter.Source, kernel.ConfigurationStore);

WindsorContainer container = new WindsorContainer(kernel, installer);
container.Installer.SetUp(container, kernel.ConfigurationStore);

This setup is a little more complex than what you normally do:

WindsorContainer container = new WindsorContainer(new XmlInterpreter());

But there is no constructor that takes both an IKernel and an IConfigurationInterpreter. I have thought about submitting such a constructor as a patch to the Castle team but have not gotten around to it. Anyway, now that we are done we can do this:

IUserRepository repos = container.Resolve<IUserRepository>();

The container will dynamically select the correct component based on the current principal. This also works if you have a component with a constructor dependency on IUserRepository. You have to think about the lifestyle, though, so that you do not store a reference to an IUserRepository implementation in a singleton component.
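
One way to handle that is to give the repository components a transient lifestyle, so a fresh instance is resolved on each request; the lifestyle attribute is part of the standard Windsor configuration schema:

```xml
<component id="b2e.user.repository"
           service="ExtendingWindsor.IUserRepository, ExtendingWindsor"
           type="ExtendingWindsor.EmployeeUserRepository, ExtendingWindsor"
           lifestyle="transient"
           channel="B2E" />
```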

Please note that the above implementation of ExtendedNamingSubSystem is just a proof of concept. For a real implementation you should try to reduce the number of calls to base.GetHandlers(service).

Update: Instead of extending DefaultNamingSubSystem yourself you can use the KeySearchNamingSubSystem that is included in the Castle Windsor release. It allows you to embed metadata in the component id and use it for dynamic component selection.

For a current project I decided to use MSBuild (for various reasons). The build script is used to automate a build and delivery process that was previously handled manually. I have worked a lot with build scripts during the last few years but I have only used NAnt, so it was fun to learn and try something new.

Although MSBuild is a different build framework, it shares some fundamental concepts with NAnt (XML based, targets and dependencies, etc.). After some googling I did not find any real-world MSBuild examples, but I happened to know that Rhino Commons is one of the very few open source projects that use MSBuild, and it was a great reference for a well-structured build file.

The Good: One of the nice things you can do in MSBuild is declare data using ItemGroups; the data can contain metadata, and that metadata can be consumed and used by tasks. Here is an example:

<ItemGroup>
  <ProgramFiles Include="$(OutDir)_PublishedWebsites\WebApp\**\*">
    <PackageDir>www\site.com\WebApp</PackageDir>
    <DevelopmentDir>\\integration.int\d$\Inetpub\WebApp</DevelopmentDir>
  </ProgramFiles>
</ItemGroup>
...
<Target Name="PushToIntegration" DependsOnTargets="CreatePackage">        
    <Exec Command="xcopy /E /Y /D &quot;$(ReleaseDir)\%(ProgramFiles.PackageDir)&quot; &quot;%(ProgramFiles.DevelopmentDir)&quot;" />
</Target>

The Bad: Properties and ItemGroups are evaluated when the build script starts executing. In order to create properties that are based on some dynamic data (like the current time) you have to use tasks and output parameters, like this:

<Time Format="yyyy-MM-dd HH:mm">
    <Output TaskParameter="FormattedTime" PropertyName="BuildTime" />
</Time>

This is very annoying. The syntax is even more cumbersome when you have to create dynamic ItemGroup data using the CreateItem task. I really miss the NAnt helper functions that can be used inside property values and conditionals. Example:

<property name="machine" value="${environment::get-machine-name()}" />
...
<if test="${target::exists(machine)}"> 
  <call target="${machine}"/>
</if>
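
For comparison, here is roughly what creating item data dynamically looks like in MSBuild with the CreateItem task (the target and item names are made up for illustration):

```xml
<Target Name="CollectAssemblies">
  <CreateItem Include="$(ReleaseDir)\**\*.dll">
    <Output TaskParameter="Include" ItemName="ReleaseAssemblies" />
  </CreateItem>
</Target>
```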

I guess if I had to choose between MSBuild and NAnt for a personal project I would probably choose NAnt, but it is not a clear winner; they both have their strengths and weaknesses.


I really like the new component registration API in Castle Windsor. If you compile the trunk yourself with Visual Studio 2008 you also get some .NET 3.5-only methods that use lambda expressions. These new methods make the API really powerful.

// Register services
Container.Register(AllTypes.Pick()
    .FromAssembly(Assembly.GetExecutingAssembly())
    .If(type => type.Namespace == typeof(ISearchService).Namespace)
    .WithService.FirstInterface());

This registers all classes that exist in the same namespace as the ISearchService interface. When Windsor registers the classes it will take the first interface as the main interface (the one you can use to resolve the component). The nice thing is that this can be used together with the XML-based configuration, since only components that are not already registered will be added. This gives you the chance to later swap a component via the XML configuration.

Ayende has been posting a lot about automatic component registration using Binsor Windsor configuration scripts. The programming experience when all components are registered and hooked up automatically is really nice; it lets you focus on the important stuff.

I have been working on a WPF application lately, and today I came across a rather common scenario where I needed to do work on a background thread (in order not to lock up the UI). In this scenario I needed to do indexing, and during the indexing I want the UI to be updated with progress (which file is being indexed, how many have been indexed, etc.). I handled this by having the indexing service expose progress events that the UI could subscribe to, like this:

private void OnStartIndexing()
{
    var indexer = new FileIndexer();
    indexer.IndexingFile += IndexingFileCallback;
    indexer.IndexingCompleted += IndexingCompletedCallback;
    indexer.SavingIndexStarted += IndexerSavingIndexStarted;

    ThreadPool.QueueUserWorkItem(x => { indexer.Run(); });
}

protected void IndexingCompletedCallback()
{
    View.HideInfoText();
}

The problem here is that the IndexingCompletedCallback method is not called on the UI thread, so it will throw an exception when it touches the UI. To solve this you have to use the Dispatcher, like this:

Action action = delegate()
{
    View.HideInfoText();
};
Dispatcher.BeginInvoke(DispatcherPriority.Normal, action);

When I saw this I thought that it could be handled in a more general, declarative way using method attributes and AOP. So I tried to get this to work:

[UseDispatcher]
protected virtual void IndexingCompletedCallback()
{
    View.HideInfoText();
}

The fun part was that it was very easy to do. Since I already used Castle Windsor for my presenters, it was just a matter of configuring my base presenter class with an interceptor, and in the interceptor calling invocation.Proceed() in a delegate passed to the Dispatcher.

Just for completeness, here is the code for the base presenter:

[Interceptor(typeof(PresenterInterceptor))]
public abstract class Presenter<T>
{
    public T View { get; private set; }
    
    public void Wireup(T view)
    {
        View = view;
        Initialize();
    }

    protected abstract void Initialize();
}

And here is the interceptor:

public class PresenterInterceptor : IInterceptor
{
    private IDispatcher dispatcher;

    public PresenterInterceptor(IDispatcher dispatcher)
    {
        this.dispatcher = dispatcher;
    }

    public void Intercept(IInvocation invocation)
    {
        object[] attributes = invocation.Method.GetCustomAttributes(typeof (UseDispatcherAttribute), true);
        
        if (attributes.Length == 0)
        {
            invocation.Proceed();
            return;
        }

        dispatcher.RunInUI(invocation.Proceed);
    }
}
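
For completeness, here is a sketch of the two pieces not shown above: the marker attribute and the dispatcher abstraction injected into PresenterInterceptor. The names follow the usage above, but the members are my assumptions; in the real application IDispatcher would wrap the WPF Dispatcher.BeginInvoke call, while a synchronous stand-in is shown here so the sketch works outside a UI thread:

```csharp
using System;

// Marker attribute checked by the interceptor (name taken from the
// [UseDispatcher] usage above).
[AttributeUsage(AttributeTargets.Method, Inherited = true)]
public class UseDispatcherAttribute : Attribute { }

// Abstraction over the WPF Dispatcher so the interceptor stays testable.
public interface IDispatcher
{
    void RunInUI(Action action);
}

// Stand-in that runs the action immediately; the WPF implementation would
// call Dispatcher.BeginInvoke(DispatcherPriority.Normal, action) instead.
public class SynchronousDispatcher : IDispatcher
{
    public void RunInUI(Action action)
    {
        action();
    }
}

// Example callback carrying the attribute, as a presenter would declare it.
public class DemoPresenterCallbacks
{
    [UseDispatcher]
    public virtual void IndexingCompleted() { }
}
```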

Okay, so after all that I discovered that the BackgroundWorker might have been a good choice for this scenario, since it has a progress event that seems to be raised on the UI thread automatically; I am not sure yet. I will have to try it and do another blog post about it. But I still feel that this solution is nice and easy, and if you use Castle Windsor it is a quick thing to implement.

Okay, maybe a little too much code for a first blog post, but I named the blog slickcode, so why not :)