I find that .NET applications are often split into way too many assemblies. I am guilty of doing this myself, but lately I have been using namespace separation to a larger extent. There are pros and cons to both approaches: assemblies allow easy dependency cycle detection, on-demand loading and stricter encapsulation (the internal keyword), but multi-project solutions also slow down compilation considerably and make Visual Studio + plugins (ReSharper) less responsive.
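
To make the encapsulation point a bit more concrete, here is a minimal sketch (the type name is made up): a type marked internal is hidden from other assemblies by the compiler, while a type that is only tucked away in a namespace still has to be public to be usable elsewhere in the same assembly, so hiding it is convention only.

// Assembly separation: compiler-enforced encapsulation. This type
// cannot be referenced from outside its own assembly.
internal class OrderNumberGenerator
{
    public string Next() { return "ORD-0001"; }
}

// Namespace separation: the type must be public to be used from other
// namespaces in the same assembly, so "hiding" it is convention only.
namespace App.Orders.Internal
{
    public class OrderNumberGenerator
    {
        public string Next() { return "ORD-0001"; }
    }
}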

For a more detailed list of pros and cons, Patrick Smacchia has a great article on controlling dependencies. My main objection to some multi-project solutions is that many of the assemblies are unnecessary; the added flexibility and dependency control are not needed and just add complexity.

An example of a solution structure containing too many assemblies:

  • App.Services
  • App.Services.DataContracts
  • App.Services.ServiceContracts
  • App.Services.MessageContracts
  • App.Services.TypeConverters
  • App.Services.Validators

I reviewed a solution recently containing 31 projects, and the list above is similar to what I found in it. About half of the 31 projects only had 2-3 types in them! I remember seeing the same in the Web Service Software Factory templates: what is the point of having data contracts, service contracts and message contracts in separate assemblies? Sure, this could be required in some special edge case, but unless you absolutely need the split I don't see the point.
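
For comparison, here is a rough sketch of what the same logical separation could look like inside a single App.Services project, using namespaces instead of assemblies (the type names are made up):

// One App.Services assembly; the boundaries are namespaces, not projects.
namespace App.Services.DataContracts
{
    public class CustomerData
    {
        public string Name { get; set; }
    }
}

namespace App.Services.ServiceContracts
{
    public interface ICustomerService
    {
        App.Services.DataContracts.CustomerData GetCustomer(int id);
    }
}

namespace App.Services.Validators
{
    public class CustomerValidator
    {
        public bool IsValid(App.Services.DataContracts.CustomerData customer)
        {
            return !string.IsNullOrEmpty(customer.Name);
        }
    }
}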

Controlling dependencies

The standard build tools have built-in dependency cycle checks for assemblies but not for namespaces. However, NDepend can analyse dependencies between namespaces; here is a view of the namespace dependencies within Castle.MonoRail.Framework.

image

The dependency matrix generated by NDepend can be a little hard to interpret, at least I thought so at first. Each cell shows the number of types in one namespace that are used by the other.

  • A green cell means there is a one-directional dependency from the horizontal to the vertical namespace
  • A blue cell means the opposite, a one-directional dependency from the vertical to the horizontal namespace
  • A black cell means a bidirectional dependency

So, simple rule: black cell bad, green/blue good. Well, if only it were that simple. If you have concrete implementations of interfaces in the same namespace as the interface, you will have loosely coupled code but a potential bidirectional dependency on the namespace. The solution is to place interfaces and implementations in separate namespaces, as in the sketch below. I find this useful sometimes, but I don't think it should be a general rule.
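
A minimal sketch of that kind of separation (the names are made up):

// Abstractions live in the root namespace...
namespace MyApp.Services
{
    public interface IEmailSender
    {
        void Send(string to, string body);
    }
}

// ...and concrete implementations live one level down, so the dependency
// only points from MyApp.Services.Impl to MyApp.Services, never back.
namespace MyApp.Services.Impl
{
    public class SmtpEmailSender : MyApp.Services.IEmailSender
    {
        public void Send(string to, string body)
        {
            // send the mail using System.Net.Mail, omitted here
        }
    }
}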

You can also view indirect dependencies in NDepend, which can highlight cyclic dependencies between namespaces:

image

Here we see that almost every namespace is part of a cyclic dependency. You can view which namespaces are involved by left-clicking on a cell:

image

I am not passing judgement on the MonoRail code. I have studied and been inspired by the source code of Castle Windsor, MicroKernel and MonoRail for years; they contain some of the best written and designed .NET software I have seen.

The namespace dependencies in MonoRail are mostly caused by having interfaces and their implementations in the same namespace; for example, a lot of concrete types in Castle.MonoRail.Framework reference types in deeper namespaces that in turn reference abstract types defined in Castle.MonoRail.Framework.

NDepend CQL

NDepend has a very powerful SQL-like query language (CQL) that can pinpoint design issues within your code. And as Jim Bolla pointed out, you can create CQL queries that check that your code is not breaking separation of concerns.

WARN IF Count > 0 IN SELECT METHODS WHERE IsUsing "System.Web" AND 
	IsUsing "System.Data.SqlClient"

This query selects all methods that use types from both System.Web and System.Data.SqlClient; if it returns a method, like the hypothetical one above, you know there is something wrong! You can also create queries that check for bidirectional namespace dependencies:

WARN IF Count > 0 IN SELECT NAMESPACES WHERE
IsDirectlyUsing "MyNamespace" AND IsDirectlyUsedBy "MyNamespace"

These queries define constraints that you can then use in your build process. NDepend can at first be a little daunting because it extracts so much information that there is a risk of information overload, but it can be a really powerful tool for maintaining code quality and as an aid for refactoring.

I have found that my controller tests usually contain some repetitive code for checking the ActionResult type, for example doing a safe typecast to ViewResult and asserting that it is not null, checking the view name, the ViewData.Model type, etc. Some extension methods for the ActionResult and ViewResult classes can take care of this.

Example:
[Fact]
public void Calling_ViewHistory_Should_ShowHistoryList()
{
    indexRepos.Expect(x => x.GetChangesets("SvnRepos", "/trunk", 5, 10))
        .Return(testData.SvnChangesets);

    var model = controller.ViewHistory("SvnRepos", "trunk", 5)
        .ReturnsViewResult()
        .ForView("History")
        .WithViewModel<HistoryViewModel>();            
               
    Assert.NotEmpty(model.Changesets);
    
    indexRepos.VerifyAllExpectations();
}

ReturnsViewResult asserts that the returned ActionResult is actually a ViewResult; ForView asserts that the ViewResult has the correct view name set; WithViewModel asserts that ViewData.Model has the specified type and returns the model.

Here is the code for the extension methods:
public static ViewResult ReturnsViewResult(this ActionResult result)
{
    var viewResult = result as ViewResult;
    Assert.NotNull(viewResult);
    return viewResult;
}

public static ViewResult ForView(this ViewResult result, string viewName)
{
    Assert.Equal(viewName, result.ViewName);
    return result;
}

public static ViewResult ForMaster(this ViewResult result, string masterName)
{
    Assert.Equal(masterName, result.MasterName);
    return result;
}

public static ContentResult ReturnsContentResult(this ActionResult result)
{
    var contentResult = result as ContentResult;
    Assert.NotNull(contentResult);
    return contentResult;
}

public static TViewModel WithViewModel<TViewModel>(this ViewResult result) where TViewModel : class
{
    TViewModel viewData = result.ViewData.Model as TViewModel;
    Assert.NotNull(viewData);
    return viewData;
}

public static TType WithViewData<TType>(this ViewResult result, string key) where TType : class
{
    TType viewData = result.ViewData[key] as TType;
    Assert.NotNull(viewData);
    return viewData;
}
The idea for this is taken from browsing the source of Suteki Shop, a very nicely coded ASP.NET MVC app; I only changed some of the method names. I also found a very simple but nice little extension method for the string class:
public static string With(this string format, params object[] args)
{
    return string.Format(format, args);
}

//...

string msg = "First name: {0}, Last name {1}".With(user.FirstName, user.LastName);

Simple, nice and elegant :)

On a completely unrelated topic, Olson Jeffery has put together the first release of Boo Lang Studio. It is still a pretty rough alpha version, so if you try it please report your issues in the CodePlex issue tracker!

I am not a big fan of frame/iframe solutions but iframes are sometimes necessary if you need to incorporate a legacy application within a new portal/application.

One of my main problems with iframes is that the address bar does not change when you navigate to a sub page within the iframe. This means that you cannot copy the current url from the address bar or bookmark the page and expect to see the same page when you revisit the bookmark.

I thought I would try to work around these problems using the url routing engine in ASP.NET MVC. I started with this url routing rule:

routes.MapRoute("FramedApp",
                "framedApp/{*page}",
                 new { controller = "FramedApp", action = "ViewPage", page = "" }); 

Here I use a wildcard rule to capture the complete url after "framedApp/" and pass that url to the ViewPage action, which looks like this:

public ActionResult ViewPage(string page)
{
    ViewData["page"] = page;
    return View();
}

Not much going on here; I simply pass the captured page url to the view. The view can then use this url to generate the src attribute for the iframe:

<viewdata page="string" />

<div class="framed-application">
    <iframe src="http://localhost/legacyApp/${page}" frameborder="0"></iframe>    
</div>

So what has this accomplished? Well, now we can navigate directly to a sub-page within the framed legacy app with a simple url syntax, for example: /framedApp/urlFor/legacyPage.aspx. This will make linking to specific pages in the legacy app a lot easier, but there is still one big issue left. The url is not going to change when you move from legacyPage.aspx to legacyPage2.aspx, so you still cannot bookmark some pages.

This problem is trickier to work around. My first thought was to change the url with javascript every time a page loads in the iframe, so that if you click a link to go to legacyPage2.aspx (from within the iframe) the address bar could be updated to reflect the new page. However, you cannot change the browser url from javascript without causing the browser to reload the page, which is not something we want. What you can do from javascript is set the window location hash, that is the text after the "#". The text after the hash sign is normally used to make the browser jump to a specific named element.

The fact that the text after the hash sign can be set by javascript is used by many ajax applications to support bookmarking and browser history (the back button). For example, if you view a label in Gmail the browser url looks like this:

image

The problem, though, is that the text after the hash sign is not sent to the server, so the url routing solution I tested above will not work; I need to change it into a javascript solution.

Here is what I ended up with (using jQuery):

$(document).ready(function() {
  // If the url contains a hash, load the corresponding legacy page in the iframe.
  if (window.location.hash.length > 0)
  {
    var iframe = $('iframe').get(0);
    iframe.src = "http://localhost/legacyApp/" + window.location.hash.substring(1);
  }

  // When a new page loads in the iframe, reflect its url in the hash.
  $('iframe').load(function() {
    var page = this.contentDocument.baseURI.substring("http://localhost/legacyApp/".length);
    window.location.hash = page;
  });
});

What I do here is check on document load whether the browser url contains a hash argument; if it does, I take the value and append it to the legacyApp url. I also hook up the iframe load event to update the hash when a new iframe page has loaded, for example after you click a link to go to another page in the iframe.

image

As you can see from the screenshot above, the url now contains the url of the framed page, and it is updated as you navigate the framed application. So bookmarking, and copying the url to send to a friend, will work. This makes working with web applications that use frames a lot less frustrating, at least for me, as you can clearly see the url for both the main page and the framed page. No need to right-click This Frame -> View Frame Info to find out what page you are currently viewing!

I discovered a nice feature in Castle Windsor yesterday (and a need for this feature).

Let's say you have the following classes:

public class TfsSourceControl : ISourceControl
{
  public TfsSourceControl(IRepositoryWebSvc websvc)
  {
    //...
  }
}

public class RepositoryWebSvc : IRepositoryWebSvc
{
  public RepositoryWebSvc(IContext context, IFileCache cache)
  {
    //...
  }
}

In this scenario the IContext interface represents something that I want to pass in manually at resolution time; it could for example be a user context or some contextual settings, that is, a dependency that cannot be resolved automatically. In Windsor you can pass in manual dependencies like this:

Hashtable arguments = new Hashtable();
arguments["context"] = myContext;
Container.Resolve<IRepositoryWebSvc>(arguments);

The code above manually supplies the dependency for IContext. What I did not know was that you can pass in arguments for sub-dependencies as well, so this also works:

Hashtable arguments = new Hashtable();
arguments["context"] = myContext;
Container.Resolve<ISourceControl>(arguments);

The dependency resolver in Windsor will carry the manual arguments along to sub-dependencies (as well as to sub-sub dependencies and so on). This enables you to request a top-level component from Windsor and pass in manual arguments that are needed by any component in the dependency tree. In the .NET 3.5 build of Castle Windsor you can use an anonymous type instead of a dictionary:

Container.Resolve<TfsSourceControl>(new {context = myContext});
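
For completeness, here is a rough sketch of what the registration side could look like, so that everything except IContext is wired up automatically. This assumes the fluent registration API in the newer Windsor builds, and FileCache is a made-up implementation of IFileCache:

using Castle.MicroKernel.Registration;
using Castle.Windsor;

var container = new WindsorContainer();

// Register everything except IContext; it will be supplied manually at resolve time.
container.Register(
    Component.For<ISourceControl>().ImplementedBy<TfsSourceControl>(),
    Component.For<IRepositoryWebSvc>().ImplementedBy<RepositoryWebSvc>(),
    Component.For<IFileCache>().ImplementedBy<FileCache>());

// IContext is the only dependency Windsor cannot figure out on its own.
var sourceControl = container.Resolve<ISourceControl>(new { context = myContext });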

Of course you should try to minimize the need to pass in manual arguments for dependencies; you want to utilize the automatic dependency resolution as much as possible, but sometimes it can be very useful.

My current customer did a study of portal / content management systems (CMS) to replace their existing system. I read their reports and felt compelled to do a little googling and studying myself. I haven't used a CMS for any real application development before; I have been to presentations about SiteCore and I have experimented with Umbraco and SharePoint, but that is about it. I think CMS systems can be a great tool for building portal / content sites with limited functionality, but when the system you are meant to build leans more towards an application with rich functionality, my instinct is that most CMS systems add too much complexity and friction.

The reason for this is that the way you extend or add functionality to CMS / portal systems is normally through portlets, web parts or UserControls; in short, you add functionality to an already existing application. This ties the application you are building heavily to the CMS and often forces you into a certain way of development. It may require a lot of manual configuration and hacks to work around limitations in the CMS. I am afraid that such solutions will be hard to develop and maintain.

I feel that an alternative way to develop applications which require rich content management is to develop a standalone application that only depends on a content API; the CMS is then only responsible for the editorial aspects of content management. So I began to search for a CMS that supported this and was quite surprised to find that in almost every CMS it was very hard to separate out and use only the content API in a standalone application.

I finally found N2 CMS, an open source CMS developed by Cristian Libardo. N2 works like most CMS systems in that you develop your site in the N2 application, which contains the functionality for content editing. But you can also easily define and consume content from a standalone application. The way you define your content is through regular .NET classes.

Example:

[Definition("Standard content page", "ContentPage")]
[WithEditableTitle("Title", 10, ContainerName = Tabs.Content)]
[WithEditableName("URL", 11, ContainerName = Tabs.Content)]
[AllowedChildren(typeof(InfoBox))]
public class HelpPage : AbstractContentPage
{
    [EditableFreeTextArea("BodyText", 13, ContainerName = Tabs.Content)]
    public virtual string BodyText
    {
        get { return (string)(GetDetail("BodyText") ?? string.Empty); }
        set { SetDetail("BodyText", value, string.Empty); }
    }

    [EditableTextBox("Meny name", 12, ContainerName = Tabs.Content)]
    public virtual string ShortName
    {
        get { return (string)(GetDetail("ShortName") ?? string.Empty); }
        set { SetDetail("ShortName", value, string.Empty); }
    }
   
    [EditableChildren("Right column", Zones.RightColumn, 110, ContainerName = Tabs.RightColumn)]
    public virtual IList<InfoBox> InfoBoxes
    {
        get { return GetChildren<InfoBox>(Zones.RightColumn); }
    }
}

As you can see, you adorn your classes and properties with attributes to inform the content editor how each content item should be edited. You have a lot of control over how to structure your content. After placing the assembly containing your content model in the bin folder of the N2 web application, the content types become available in the editor interface:

image

From your application it is then very easy to consume the content:

public ContentItem GetByContentPath(string path)
{
    return Context.Current.UrlParser.Parse(path);
}

The content path is the logical naming of your content structure, for example "/service/help/faq". I just want to point out that I am not using N2 in the standard way; if you are using WebForms and follow the example site you get a more integrated experience, and I don't think you have to use the url parser directly like above. I found N2's use of .NET classes to define content a really clever way to enforce separation between content and presentation and to allow for rich reuse of content in different scenarios.
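
To make the consuming side a bit more concrete, here is a sketch of an ASP.NET MVC controller action that fetches the HelpPage defined earlier and hands it to a view (the controller, route parameter and view name are made up):

using System.Web;
using System.Web.Mvc;

public class HelpController : Controller
{
    // Fetches a content item by its logical content path, e.g. "faq" maps
    // to "/service/help/faq", and passes the typed HelpPage to the view.
    public ActionResult Show(string page)
    {
        var content = N2.Context.Current.UrlParser.Parse("/service/help/" + page) as HelpPage;

        if (content == null)
            throw new HttpException(404, "Help page not found");

        return View("Show", content);
    }
}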

I hope that the approach outlined above will prove to be a more natural and frictionless way to develop rich applications that require content management. You get the rich content editing you are after and at the same time complete freedom in how you develop your application; for example, you can use new frameworks and tools that would otherwise have been impossible. You can use MonoRail, ASP.NET MVC, WebForms or WPF, and you do not tie your application to a specific product.

I am easily impressed by the huge feature lists of big CMS systems, counting all the available portlets and modules of existing functionality, but I think it is important to ask yourself what you really need. Companies behind CMS products are very good at selling them, showcasing all the extra "free" functionality that comes with them and never the added complexity that comes with trying to extend and develop complex functionality on top of an existing application.

I posted a topic on CMS and application development on the altdotnet mailing list, which got some interesting replies, but it would be great to get some more comments from anyone with experience doing rich application development using a portal/CMS system.