Sunday, December 13, 2009

Book Review: Enterprise Integration Patterns

I’ve just finished reading ‘Enterprise Integration Patterns’ by Gregor Hohpe and Bobby Woolf for the second time. I first read it when it was published back in 2004. At the time I was struggling with web application architecture, and so it wasn’t directly relevant to my work. I guess I took away some idea of how messaging systems could work, but mostly my thoughts were, ‘that’s interesting, I wonder who uses it’. I came to read it a second time because I’m now responsible for designing an enterprise integration architecture, so it’s directly relevant to my work.

Whatever you may think about EIP, its influence has been huge. I don’t think anyone can have an intelligent discussion about integration architecture without referencing it. It defined the vocabulary for a lot of SOA and if you read anything by the SOA industry gurus, you will constantly encounter citations from this book.

It’s pretty clear how Hohpe and Woolf think you should do enterprise integration: although Chapter 2 describes several integration styles (File Transfer, Shared Database, Remote Procedure Invocation), the rest of the book is about messaging and messaging only. Indeed, that is probably the book’s most important legacy. EIP is one of the key reasons that messaging has now become the default style of enterprise integration.

So why do Hohpe and Woolf think that messaging is the answer? Like pretty much anything to do with software, it’s all about coupling, or rather avoiding coupling. Of course this includes logical coupling (not making one application care about the internals of another), but you can achieve that with web services. What messaging gives us is temporal decoupling: by making all our integration asynchronous we no longer need all our components to be available all the time. Introducing a service bus and publish/subscribe messaging also removes the need for our applications to care where the other components are, or even whether there are any to communicate with.
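
To make the temporal decoupling concrete, here is a minimal sketch of my own (the IBus interface and message type are hypothetical, not an example from the book): the publisher fires a message at the bus and carries on, without blocking on, or even knowing about, the systems that will eventually consume it.

// Hypothetical bus abstraction, purely to illustrate the shape of the idea.
public interface IBus
{
    void Publish<T>(T message);
}

public class OrderPlaced
{
    public int OrderId { get; set; }
}

public class OrderService
{
    private readonly IBus bus;

    public OrderService(IBus bus)
    {
        this.bus = bus;
    }

    public void PlaceOrder(int orderId)
    {
        // Fire-and-forget: the warehouse and invoicing systems can be
        // offline right now; the message sits in a queue until they return.
        bus.Publish(new OrderPlaced { OrderId = orderId });
    }
}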

The first three chapters are essential reading for anyone building business applications. Chapter 1 describes the integration issues that face many large organisations and shows how messaging might be used to glue disparate applications together. Chapter 2 describes the alternatives to messaging explained above and why you shouldn’t use them. Chapter 3 introduces basic messaging concepts including channels, messages, pipes and filters, routing, transformation and endpoints. Together they are probably the best explanation of why you should use messaging, and how to get started, that you will find anywhere.

EIP is a patterns book. This means that its primary purpose is to define a vocabulary. However, in defining the vocabulary Hohpe and Woolf also provide a comprehensive cookbook of solutions to common integration problems. Like all patterns books, you often think, ‘I’ve done that myself’, but now you have a name for it and know the alternatives and possible problems before you start. Being a patterns book also means that it can be a bit tedious to read from cover to cover. Each pattern has to stand independently and has the same layout: a name, an icon, the context, a problem statement, the forces acting on it, a solution including a sketch, and related patterns. This makes it excellent as a reference work, but leaves something to be desired if you are expecting more of a narrative. But even if you don’t fancy tackling all 600+ pages of patterns, I would still recommend that anyone building business software read the first three chapters.

The icons for the patterns are a great visual vocabulary, but having said that I haven’t seen them very widely used. It’s a shame because they are quite pretty. You can download a Visio template for them here. Here’s a completely meaningless EIP style diagram I just stuck together:

[Example EIP-style diagram]

My only other criticism of the book is that the technology it describes is now five years out of date. In the same way that Martin Fowler’s ‘Patterns of Enterprise Application Architecture’ sometimes reads like a specification for NHibernate, EIP often reads like a description of NServiceBus or MassTransit. As a keen MassTransit user, I often read a pattern only to think, ‘well, MassTransit covers that one, I don’t have to worry about it’. Many of the patterns have been further refined in the meantime and you can’t talk about messaging these days without some reference to the Command Query Responsibility Segregation pattern. For that reason, you shouldn’t read this book in isolation, but rather as a foundation for further investigation.

Monday, December 07, 2009

Skills Matter Functional Programming Exchange

I had a great time today at the Functional Programming Exchange organised by Robert Pickering and Skills Matter. Robert managed to grab some really interesting speakers who gave a nice snapshot of the current art and use of FP. The whole caboodle was hosted in Skills Matter’s new London offices and they did a magnificent job; plenty of free tea, cakes, sandwiches and pizza. Geek heaven :)

Here’s a rundown of the talks:

Sadek Drobi – Computation Abstraction.
I was late getting the train up from Brighton and arrived halfway through the first talk, which was a pity because in many ways it set the scene for the whole day. Sadek showed how functional programming allows abstractions that are not available in imperative languages. I really liked the discussion on error handling and how you can easily create your own control structures with higher-order functions. What was very nice was that he used a variety of functional languages for his demos.

Matthew Sackman – Supercharged Rabbit: Resource Management at High Speed in Erlang
I really enjoyed this talk. Matthew took us on a whirlwind tour of RabbitMQ, an AMQP based messaging system implemented in Erlang. Apparently it can scale to queues as large as your disk space without sacrificing performance. Sounds very impressive. It was cool to hear about why Erlang makes such an excellent tool for writing highly concurrent software. It was also interesting to hear what Matthew disliked about Erlang in comparison with Haskell. I liked this quote, “Rabbit isn’t really fast, it can only manage 25,000 messages per second” … compared with MSMQ that is fast.

Anton Schwaighofer – F# and Units-Of-Measure for Technical Computing
This was the talk that surprised me the most. F# is the functional language that I’ve made the biggest effort to learn and I thought I understood units-of-measure. To be honest, they hadn’t made a particular impression on me, but I was very impressed after Anton’s talk. I really liked the way that the compiler understands how computation affects units. So for example, if you have a function that takes miles and hours as its arguments and returns the miles divided by the hours, the compiler knows that the output is miles per hour.

Ganesh Sittampalam – Functional Programming for Quantitative Modelling at Credit Suisse
Ganesh gave some very practical examples of how his group at CS use both Haskell and F#. Once again the recurring theme was that Haskell is really good for writing DSLs. Ganesh explained how they had written a DSL to create Excel spreadsheets and the challenges that involved. He also explained how they were using F# now as their core application development language.

Duncan Coutts – Strong Types and Pure Functions
I found Duncan’s talk the hardest to follow because of my almost complete ignorance of Haskell. Apparently Haskell allows you to specify, in a function’s type signature, the side effects that it is allowed to have. By default Haskell is a side-effect-free language. That immutability or ‘pureness’ allows all kinds of optimisations and is one of the fundamentals of functional programming, but sometimes you have to have side effects; any IO functions fall into this category. What Haskell allows you to do is specify with a ‘Monad’ which side effects are allowed. I’m going to have to read a Haskell book.

Robert Pickering – Using Combinators to Tackle the HTML Rendering Problem
Robert showed us an F# DSL to generate HTML and Javascript. I guess it was more interesting from the DSL point of view than for the HTML/Javascript generation itself. The DSL meme runs throughout FP and it’s instructive to see how trivial it is to write a simple DSL in F#. I just didn’t like the example. I have a fundamental distrust of tools that try to hide HTML and Javascript from me; I like HTML and Javascript. We’re only just recovering from the WebForms train wreck with a back-to-basics approach, so I’m a bit twitchy about this kind of thing.. back off Robert.. OK!

Sunday, December 06, 2009

The Monthly Code Quality Report

Since I started my new ‘architect’ (no, I do write code… sometimes) role earlier this year, I’ve been doing a ‘monthly code quality report’. This uses various tools to give an overview of our current codebase. The output looks something like this:

[Sample monthly code quality report]

Most of the metrics come from NDepend, a fantastic tool if you haven’t come across it before. Check out the author, Patrick Smacchia’s, blog.

We have lots of generated code, so we obviously want to differentiate between that and the hand written stuff. Doing this is really easy using CQL (Code Query Language), a kind of code-SQL. Here’s the CQL expression for ‘LoC Failing basic quality metrics’:

WARN IF Count > 0 IN SELECT METHODS /*OUT OF "YourGeneratedCode" */ WHERE 
(   NbLinesOfCode > 30 OR
    NbILInstructions > 200 OR
    CyclomaticComplexity > 20 OR
    ILCyclomaticComplexity > 50 OR
    ILNestingDepth > 4 OR
    NbParameters > 5 OR
    NbVariables > 8 OR             
    NbOverloads > 6 )
AND
!( NameIs "InitializeComponent()"
    OR HasAttribute "XXX.Framework.GeneratedCodeAttribute" 
    OR FullNameLike "XXX.TheProject.Shredder"
)

Here I’m looking for overly complex code and excluding anything that is attributed with our GeneratedCodeAttribute; I’m also excluding a project called ‘Shredder’, which is entirely generated.

NDepend’s dependency analysis is legendary and also well worth a look, but that’s another blog post entirely.

The duplicate code metrics are provided by Simian, a simple command line tool that trawls through your source code looking for repeated lines. I set the threshold at 6 lines of code (the default). It outputs a complete list of all the duplications it finds, and it’s nice to be able to run it regularly, put the output under source control, and then diff versions to see where duplication is being introduced. A great way of fighting the copy-and-paste code reuse pattern.

The unit test metrics come straight out of NCover. Since there were no unit tests when I joined the team, it’s not really surprising how low the level of coverage is. The fact that we’ve been able to ramp up the number of tests quite quickly is satisfying though.

As you can see from the sample output, it’s a pretty cruddy old codebase where 27% of the code fails basic, very conservative, quality checks. Some of the worst offending methods would make great entries in ‘the daily WTF’. But in my experience, working in a lot of corporate .NET development shops, this is not unusual; if anything it’s a little better than average.

Since I joined the team, I’ve been very keen on promoting software quality. There hadn’t been any emphasis on this before I joined, and that’s reflected by the poor quality of the codebase. I should also emphasise that these metrics are probably the least important of several things you should do to encourage quality. Certainly less important than code reviews, leading by example and periodic training sessions. Indeed, the metrics by themselves are pretty meaningless and it’s easy to game the results, but simply having some visibility on things like repeated code and overly complex methods makes the point that we care about such things.

I was worried at first that it would be negatively received, but in fact the opposite seems to be the case. Everyone wants to do a good job and I think we all value software quality, it’s just that it’s sometimes hard for developers (especially junior developers) to know the kinds of things you should be doing to achieve it. Having this kind of steer with real numbers to back it up can be very encouraging.

Lastly I take the five methods with the largest cyclomatic complexity and present them as a top 5 ‘Crap Code of the Month’. You get much kudos for refactoring one of these :)

Tuesday, November 24, 2009

The joy of MSpec

I’ve just kicked off a new MVC Framework application today. I’ve been considering moving to more behaviour based testing for a while now and I’ve been intrigued by Aaron Jensen’s descriptions of his MSpec (Machine Specifications) framework. So this afternoon I downloaded the build from here and got coding (just click ‘login as guest’ at the bottom).

It’s very nice. Here are some tests for my container setup:

using System.Web.Mvc;
using Castle.Windsor;
using Machine.Specifications;
using Spark.Web.Mvc;
using Mike.Portslade.Web.IoC;

namespace Mike.Portslade.Web.Tests.IoC.ContainerManagerSpecs
{
    public class when_container_is_created
    {
        Because of = () =>
            container = ContainerManager.CreateContainer();

        It should_register_HomeController = () =>
            container.Kernel.HasComponent("homecontroller").ShouldBeTrue();

        It should_register_SparkViewFactory = () =>
            container.Kernel.HasComponent(typeof (SparkViewFactory)).ShouldBeTrue();

        It should_register_an_IControllerFactory = () =>
            container.Kernel.HasComponent(typeof (IControllerFactory)).ShouldBeTrue();

        private static IWindsorContainer container;
    }
}

I love the readability of the test code. Sure you have to learn to love the ‘=()=>’, but come on people, lambda syntax is hardly new any more.

When I run this using TestDriven.NET, my test runner of choice for many years now, I get this output:

when container is created
» should register HomeController
» should register SparkViewFactory
» should register an IControllerFactory

The only thing I don’t like is that the ‘It’ has gone missing, which is a shame; otherwise this is just what I want from a behaviour based testing framework: very low friction and easy to read output.

Well done Aaron, I’m very impressed.

Sunday, November 22, 2009

Make working with open source a breeze with Hornget

If you’ve spent any time working with open source .NET projects like Castle or NHibernate you’ll know the pain of building them from source. At first it seems very easy, you just get the source code from the repository and run the build.bat file or whatever. Any good OS project should just compile into a convenient build directory from which you can grab the assemblies you want.

The pain starts when you have several OS frameworks you are using that have dependencies on each other. Castle and NHibernate are the classic example here. Castle has a dependency on NHibernate and NHibernate has a dependency on Castle. I used to have to go through a complex build-copy-rebuild dance to get the two trunks to work with each other. These days I use a whole suite of open source tools:

Castle
NHibernate
Fluent-NHibernate
Log4net
MassTransit
MvcContrib
Automapper
Spark View Engine

And there are probably others I’ve forgotten. Getting all these from trunk and building them is a major headache.

If you’ve ever played with a Linux distribution, the really cool thing you’ll notice compared with the Windows world is that you can get any software you need by typing (on Ubuntu, for example):

apt-get install myFavoriteSoftware

It downloads and installs it on your system along with any dependencies. It’s quite amazing that Windows doesn’t have this yet.

But now this awesomeness is available for open source .NET projects and is supplied by Hornget. Hornget is the work of several members of the Scottish ALT.NET group, most prominently Paul Cowan. All you have to do once it’s downloaded and built (instructions here) is type:

horn -install:nhibernate

Horn will update your local nhibernate source and all its dependencies and then build them in the correct order, copying the dependent assemblies into the correct places for the next build. It uses a Boo based DSL to configure the packages and many of the most important .NET OS projects are represented. I imagine that anyone with a serious .NET OS project is going to want to be included. It’s on my todo list to create a package description for Suteki Shop :)

The list of supported packages is here.

I just had a couple of hiccups getting it working. The first was that I didn’t have Git installed. Git is distributed source control from Linus himself. You can get a Windows version from here. You want ‘Git-1.6.5.1-preview20091022.exe’ or whatever is latest. I couldn’t use the default Git protocol from work because our firewall blocks the Git ports. There is a workaround, but for the time being I’m just using horn from home.

The second irritation was PowerShell. Rhino Tools, which many other projects have a dependency on, uses Psake, the PowerShell build tool. You have to tell PowerShell to allow script execution by typing:

set-executionpolicy remotesigned

I was totally confused when, after doing this, it still wouldn’t run Psake. It turned out that this was because I’m working on 64 bit 2008 R2 and had typed set-executionpolicy in the 64 bit PowerShell. The Psake script gets run under the x86 PowerShell, which has a separate configuration. After I set the execution policy in the x86 PowerShell prompt, it all worked. Thanks to Steve Mason for pointing that out on the Hornget group.

Sunday, November 15, 2009

Fun with Linq Aggregate

Say we’ve got a CSV file:

private const string csv = 
@"1,2,3,4,5
6,7,8,9,10
11,12,13,14,15
16,17,18,19,20";

We want to parse it into nested List<int>, and then print out all the numbers on a single line. We might do it like this:

public void ParseCsvAndOutputAsString()
{
    var data = new List<List<int>>();

    foreach (var line in csv.Split('\n'))
    {
        var innerList = new List<int>();
        foreach (var item in line.Split(','))
        {
            innerList.Add(int.Parse(item));
        }
        data.Add(innerList);
    }

    string output = "";
    foreach (var innerList in data)
    {
        foreach (var item in innerList)
        {
            output = string.Format("{0} {1}", output, item);
        }
    }

    Console.WriteLine(output);
}

Yes, I know I should use a StringBuilder and AddRange, but ignore that for the moment; I’m trying to make a point here.

Taking a collection of values and reducing them down to a single value is a very common task in programming. Here we’re doing it twice: first we’re taking a string, splitting it apart and then reducing it down to a single reference to a List<List<int>>; then we’re taking the many items of data and reducing them to a string.

This is so common in fact, that many programming languages have some kind of ‘reduce’ functionality built in. It’s especially common with functional languages. Did you know that C# also has a reduce function? It’s the Aggregate extension method. Here’s the same method written in two statements with it:

[Test]
public void ParseCsvAndOutputAsStringUsingAggregate()
{
    var data = csv
        .Split('\n')
        .Aggregate(
            new List<List<int>>(),
            (list, line) => list.Append(line
                .Split(',')
                .Select(str => int.Parse(str))
                .Aggregate(
                    new List<int>(),
                    (innerList, item) => innerList.Append(item))));

    Console.WriteLine(data
        .SelectMany(innerList => innerList)
        .Aggregate("", (output, item) => string.Format("{0} {1}", output, item)));
}

Aggregate takes two parameters. The first sets up the initial value of the ‘accumulator’; in our case we create new instances of List<List<int>>, List<int> and an empty string. The second is the function that does the accumulating.
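
Stripped of the CSV noise, the shape of the call is easier to see in a minimal example of my own (requires System.Linq; not from the original post): the seed is the starting value of the accumulator, and the lambda folds each item into it.

var numbers = new[] { 1, 2, 3, 4 };

// seed = 0, the lambda adds each item: ((((0 + 1) + 2) + 3) + 4) = 10
var sum = numbers.Aggregate(0, (acc, item) => acc + item);

// seed = "", the lambda builds up a string, just like the CSV example
var csvLine = numbers.Aggregate("", (acc, item) =>
    acc == "" ? item.ToString() : acc + "," + item);

Console.WriteLine(sum);     // 10
Console.WriteLine(csvLine); // 1,2,3,4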

Aggregate works really well with fluent interfaces, where methods return their instance. I’ve added a fluent ‘Append’ extension method to List<int> to help me here:

public static List<T> Append<T>(this List<T> list, T item)
{
    list.Add(item);
    return list;
}

So any time you’ve got a collection of stuff that you want to ‘reduce’ to a single item, remember Aggregate.

Sunday, October 25, 2009

Collection covariance with C# 4.0

Download the code for this post here: http://static.mikehadlow.com/Mike.Vs2010Play.zip

I finally got around to downloading the Visual Studio 2010 Beta 2 last weekend. One of the first things I wanted to play with was the new covariant collection types. These allow you to treat collections of a sub-type as collections of their super-type, so you can write stuff like:

IEnumerable<Cat> cats = CreateSomeCats();
IEnumerable<Animal> animals = cats;
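
It’s worth being clear why this only works for read-only interfaces. Here’s a short contrast of my own (reusing the hypothetical Cat and Animal types from the snippet above): IEnumerable<T> is covariant because it only hands items out, while List<T> stays invariant because it also accepts them.

IEnumerable<Cat> cats = CreateSomeCats();

// Safe, and new in C# 4.0: a sequence of cats can only be read from,
// so treating it as a sequence of animals can never go wrong.
IEnumerable<Animal> animals = cats;

// Still a compile error: if this were allowed you could call
// animalList.Add(new Dog()) and corrupt the underlying List<Cat>.
// List<Animal> animalList = new List<Cat>();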

My current client is the UK Pensions Regulator. They have an interesting, but not uncommon, domain modelling issue. They fundamentally deal with pension schemes, of which there are two distinct types: defined contribution (DC) schemes, where you contribute a negotiated amount, but the amount you get when you retire is entirely at the mercy of the markets; and defined benefit (DB) schemes, where you get a negotiated amount no matter what the performance of the scheme’s investments. Needless to say, a DB scheme is the one you want :)

To model this they have an IScheme interface with implementations for the two different kinds of scheme. Obvious really.

Now, they need to know far more about the employers providing DB schemes than they do about those that offer DC schemes, so they have an IEmployer interface that defines the common stuff, and then different subclasses for DB and DC employers. The model looks something like this:

[Class diagram: IScheme and IEmployer with defined benefit and defined contribution implementations]

Often you want to treat schemes polymorphically, iterating through a collection of schemes and then iterating through their employers. With C# 3.0 this is a tricky one to model. IScheme can have a property ‘Employers’ of type IEnumerable<IEmployer>, but you have to do some ugly item-by-item casting in order to convert from the internal IEnumerable<specific-employer-type>. You can’t then use the same Employers property in the specific case when you want to do some DB-only operation on DB employers; instead you have to provide another ‘DbEmployers’ property of type IEnumerable<DefinedBenefitEmployer>, or have the client do more nasty item-by-item casting.

But with C# 4.0 and covariant type parameters this can be modelled very nicely. First we have a scheme interface:

using System.Collections.Generic;
namespace Mike.Vs2010Play
{
    public interface IScheme<out T> where T : IEmployer
    {
        IEnumerable<T> Employers { get; }
    }
}

Note that the generic argument T is prefixed with the ‘out’ keyword. This tells the compiler that we only want to use T as an output value. The compiler will now allow us to cast from an IScheme<DefinedBenefitEmployer> to an IScheme<IEmployer>.

Let’s look at the implementation of DefinedBenefitScheme:

using System.Collections.Generic;
namespace Mike.Vs2010Play
{
    public class DefinedBenefitScheme : IScheme<DefinedBenefitEmployer>
    {
        List<DefinedBenefitEmployer> employers = new List<DefinedBenefitEmployer>();

        public IEnumerable<DefinedBenefitEmployer> Employers
        {
            get { return employers; }
        }

        public DefinedBenefitScheme WithEmployer(DefinedBenefitEmployer employer)
        {
            employers.Add(employer);
            return this;
        }
    }
}

We can see that the ‘Employers’ property can now be defined as IEnumerable<DefinedBenefitEmployer> so we get DB employers when we are dealing with a DB scheme. But when we cast it to an IScheme<IEmployer>, the Employers property is cast to IEnumerable<IEmployer>.

It’s worth noting that we can’t define ‘WithEmployer’ as ‘WithEmployer(T employer)’ on the IScheme interface. If we try doing this we’ll get a compile time error saying that ‘T is not contravariant’, or something along those lines. That’s because T in this case is an input parameter, and we have explicitly stated on IScheme that T will only be used for output. In any case it would make no sense for WithEmployer to be polymorphic; we deliberately want to limit DB schemes to DB employers.
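
A quick sketch of my own to show the two directions side by side (the interface names here are hypothetical, not part of the client’s model): ‘out’ restricts the type parameter to output positions, while ‘in’ is the mirror image for input-only positions.

// Covariant: T may only appear as an output, so the Employers property is
// fine, but uncommenting the Add method produces a variance compile error.
public interface IReadOnlyScheme<out T> where T : IEmployer
{
    IEnumerable<T> Employers { get; }
    // void Add(T employer);  // error: T used in an input position
}

// Contravariant: T may only appear as an input. A validator written for any
// IEmployer can safely stand in for a validator of DB employers, so an
// IEmployerValidator<IEmployer> converts to IEmployerValidator<DefinedBenefitEmployer>.
public interface IEmployerValidator<in T> where T : IEmployer
{
    bool Validate(T employer);
}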

Let’s look at an example. We’ll create both a DB and a DC scheme. First we’ll do some operation with the DB scheme that requires us to iterate over its employers and get DB employer specific information, then we’ll treat both schemes polymorphically to get the names of all employers.

public void DemonstrateCovariance()
{
    // we can create a defined benefit scheme with specialised employers
    var definedBenefitScheme = new DefinedBenefitScheme()
            .WithEmployer(new DefinedBenefitEmployer { Name = "Widgets Ltd", TotalValueOfAssets = 12345M })
            .WithEmployer(new DefinedBenefitEmployer { Name = "Gadgets Ltd", TotalValueOfAssets = 56789M });

    // we can treat the DB scheme normally outputting its specialised employers
    Console.WriteLine("Assets for DB schemes:");
    foreach (var employer in definedBenefitScheme.Employers)
    {
        Console.WriteLine("Total Value of Assets: {0}", employer.TotalValueOfAssets);
    }

    // we can create a defined contribution scheme with its specialised employers
    var definedContributionScheme = new DefinedContributionScheme()
            .WithEmployer(new DefinedContributionEmployer { Name = "Tools Ltd" })
            .WithEmployer(new DefinedContributionEmployer { Name = "Fools Ltd" });

    // with covariance we can also treat the schemes polymorphically
    var schemes = new IScheme<IEmployer>[]{
        definedBenefitScheme,
        definedContributionScheme
    };

    // we can also treat the scheme's employers polymorphically
    var employerNames = schemes.SelectMany(scheme => scheme.Employers).Select(employer => employer.Name);

    Console.WriteLine("\r\nNames of all employers:");
    foreach(var name in employerNames)
    {
        Console.WriteLine(name);
    }
}

When we run this, we get the following output:

Assets for DB schemes:
Total Value of Assets: 12345
Total Value of Assets: 56789

Names of all employers:
Widgets Ltd
Gadgets Ltd
Tools Ltd
Fools Ltd

It’s worth checking out co- and contravariance, why they’re important and how they can help you. Eric Lippert has a great series of blog posts with all the details:

Covariance and Contravariance in C#, Part One
Covariance and Contravariance in C#, Part Two: Array Covariance
Covariance and Contravariance in C#, Part Three: Method Group Conversion Variance
Covariance and Contravariance in C#, Part Four: Real Delegate Variance
Covariance and Contravariance In C#, Part Five: Higher Order Functions Hurt My Brain
Covariance and Contravariance in C#, Part Six: Interface Variance
Covariance and Contravariance in C# Part Seven: Why Do We Need A Syntax At All?
Covariance and Contravariance in C#, Part Eight: Syntax Options
Covariance and Contravariance in C#, Part Nine: Breaking Changes
Covariance and Contravariance in C#, Part Ten: Dealing With Ambiguity
Covariance and Contravariance, Part Eleven: To infinity, but not beyond

Wednesday, October 14, 2009

TFS Build: _PublishedWebsites for exe and dll projects. Part 2

By default Team Build spews all compilation output into a single directory. Although web projects are output in deployable form into a directory called _PublishedWebsites\<name of project>, the same is not true for exe or dll projects. A while back I wrote a post showing how you could grab the output for exe projects and place them in a similar _PublishedApplications directory, and this worked fine for simple cases.

However, that solution relied on getting the correct files from the single flat compile output directory. Now we have exe projects that output various helper files, such as XSLT documents, in subdirectories, so we may end up with paths like this: MyProject\bin\Release\Transforms\ImportantTransform.xslt. But because these subdirectories get flattened by the default TFS build, we lose our output directory structure.

This begs the question: why do we need to output everything in this big flat directory anyway? Why can’t we just have our CI build do the same as our Visual Studio build and simply output the build products into the <project name>\bin\Release folders? Then we can simply copy the compilation output to our build output directory.

There’s an easy way to do this, introduced with TFS 2008: simply set the CustomizableOutDir property to true and the TFS build will behave just like a Visual Studio build. Put the following in your TFSBuild.proj file somewhere near the top, under the Project element:

 

<PropertyGroup>
  <CustomizableOutDir>true</CustomizableOutDir>
</PropertyGroup>

Aaron Hallberg has a great blog post explaining exactly how this all works. Aaron’s blog is essential reading if you’re doing pretty much anything with TFS. You can still get the directory where TFS would have put the output from the new TeamBuildOutDir property.

Now the TFS build outputs into bin/Release in exactly the same way as a standard Visual Studio build and we can just grab the outputs for the projects we need and copy them to our build output directory. I do this by including a CI.exe.targets file near the end of the .csproj file of any project that I want to output:

<Import Project="..\..\Build\CI.build.targets\CI.exe.targets" />

My CI.exe.targets looks like this:

 

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <PropertyGroup>
    <PublishedApplicationOutputDir Condition=" '$(TeamBuildOutDir)'!='' ">$(TeamBuildOutDir)_PublishedApplications\$(MSBuildProjectName)</PublishedApplicationOutputDir>
    <PublishedApplicationOutputDir Condition=" '$(TeamBuildOutDir)'=='' ">$(MSBuildProjectDirectory)</PublishedApplicationOutputDir>
  </PropertyGroup>
    
    <PropertyGroup>
        <PrepareForRunDependsOn>
      $(PrepareForRunDependsOn);
      _CopyPublishedApplication;
    </PrepareForRunDependsOn>
    </PropertyGroup>

    <!--
    ============================================================
    _CopyPublishedApplication

    This target will copy the build outputs 
  
    This Task is only necessary when $(TeamBuildOutDir) is not empty such as is the case with Team Build.
    ============================================================
    -->
    <Target Name="_CopyPublishedApplication" Condition=" '$(TeamBuildOutDir)'!='' " >
        <!-- Log tasks -->
        <Message Text="Copying Published Application Project Files for $(MSBuildProjectName)" />
    <Message Text="PublishedApplicationOutputDir is: $(PublishedApplicationOutputDir)" />

        <!-- Create the _PublishedWebsites\app\bin folder -->
        <MakeDir Directories="$(PublishedApplicationOutputDir)" />

    <!-- Copy compile output to publish directory -->
    <ItemGroup>
      <ApplicationBinContents Include="$(OutputPath)\**\*.*" />
    </ItemGroup>
    
    <Copy SourceFiles="@(ApplicationBinContents)" DestinationFiles="$(PublishedApplicationOutputDir)\%(RecursiveDir)%(Filename)%(Extension)"></Copy>
    
  </Target>

</Project>

First of all we define a new property, PublishedApplicationOutputDir, to hold the directory that we want our exe’s build output to be published to. If TeamBuildOutDir is empty it means that the build has been triggered by Visual Studio, so we don’t really want to do anything. In the target _CopyPublishedApplication we create a list of everything in the build output directory, called ApplicationBinContents, and copy it all to PublishedApplicationOutputDir. Simple when you know how.

Sunday, October 11, 2009

Installing Ubuntu on Hyper-V

 


 

I haven’t played with a Linux distribution for a while, but now there are a couple of things that Linux does that I’m dying to try out. The first is CouchDb, a very cool document database that could provide a great alternative to the relational model in some situations. The second is Mono. I’m very keen to see how easy it would be to serve Suteki Shop using Mono on Linux.

My developer box runs Windows Server 2008 R2, which has to be the best Windows ever. If, like me, you get Action Pack or MSDN, you should consider 2008 R2 as an alternative to Windows 7. Check out win2008r2workstation.com for easy instructions on how to configure it for desktop use. One of the advantages of 2008 R2 is that it comes with Hyper-V, an enterprise grade virtualisation server.

I’m a very occasional Linux dabbler. I first played with Redhat back in 2000 and ran a little LAMP server for a while. I used to keep up with what was happening in Linuxland, but I’ve been out of touch for a few years now, so I only hear the loudest noises coming through the Linux/Windows firewall. One of the loudest voices is definitely Ubuntu, which seems to be the distro of choice these days, so without doing any further investigation, I just decided to go with that.

Installing Ubuntu on Hyper-V is really easy, with only one serious gotcha, which I’ll talk about presently. Just do this:

  1. Download an Ubuntu iso (a CD image) from the Ubuntu site. I chose Ubuntu 9.04 server.
  2. Install Hyper-V. Open Server Manager, click on the Roles node, click ‘Add Roles’ and follow the steps in the Add Roles Wizard.
  3. Setup Hyper-V networking. This is the only thing that caused me any trouble. Open the Hyper-V manager and under ‘Actions’, open ‘Virtual Network Manager’. I wanted my Ubuntu VM to be able to communicate with the outside world, so I initially created a new Virtual Network and selected ‘Connection Type’ –> ‘External’. I also checked the ‘Allow management operating system to share this network adaptor’ checkbox, because of course I need my developer workstation to have access to the network. However, once my Ubuntu VM was up and running, my workstation’s network got really slow and flaky; it was like browsing the internet in 1995. As soon as I shut down the VM, everything returned to normal. The Hyper-V documentation suggests that you really don’t want to check that checkbox; what you should do instead is have two NICs in your box, one for the host OS and the other for the VMs. OK Microsoft, why is that checkbox there, if what it does so plainly doesn’t work? But, OK, let’s just go with it… So I popped down to Maplin and spent £9 on a cheap NIC and installed it in my developer box. Now I have my Virtual Network linked to the new NIC and the ‘Allow management operating system to share this network adaptor’ checkbox unchecked. Both the host workstation and the VM now have their own physical connection to the network, and both behave as independent machines as far as connection speed is concerned.
  4. Create a new Virtual Machine. Also under Actions, click New –> Virtual Machine. I configured mine with 1GB Memory and a single processor.
  5. Once your VM has been created open your new VM’s settings. Under Hardware select DVD Drive, then select ‘Image File’ and browse to your Ubuntu iso.
  6. Also under Hardware, click ‘Add’ and select ‘Legacy Network Adaptor’, point it to the Virtual Network you configured in step 3. Delete the existing Network Adaptor.
  7. Start and connect to the VM. The Ubuntu install is very straightforward and shouldn’t give you any problems. The only thing that bothered me was the incredibly slow screen refresh I got via the Hyper-V connection window. I could see each character drawing on the Ubuntu install screen. One thing that surprised me was that there was no prompt for the root password; Ubuntu asks you to create a standard user/password combination and you are expected to use ‘sudo’ for any admin tasks.
  8. You get to choose some packages to install. I chose LAMP because I know I’ll need Apache and MySQL or PostgreSQL for my experiments. You also need to install Samba if you want your Ubuntu box to be recognised by Windows and to have shared directories.

Now you can download PuTTY and log into your Ubuntu server from your workstation. Isn’t that Cool :)

[Screenshot: PuTTY session logged into the Ubuntu server]

Thursday, October 08, 2009

Suteki Shop: Big in China

[Screenshot: visitor statistics for the Suteki Shop project site, with China as the biggest source of visits]

To my surprise, China is the biggest source of visits to the Suteki Shop project site, slightly beating the US and with twice the traffic of the UK. It all seems to be down to a Chinese blogger called Daizhj, who has a hugely detailed 10-post series that looks at pretty much every aspect of the project:

Asp.net MVC sample project "Suteki.Shop" analysis --- Installation chapter
Asp.net MVC sample project "Suteki.Shop" Analysis --- Controller
Asp.net MVC sample project "Suteki.Shop" Analysis --- Filter
Asp.net MVC sample project "Suteki.Shop" analysis --- Data validation
Asp.net MVC sample project "Suteki.Shop" Analysis --- ModelBinder
Asp.net MVC sample project "Suteki.Shop" Analysis --- ViewData
Asp.net MVC sample project "Suteki.Shop" analysis --- Model and Service
Asp.net MVC sample project "Suteki.Shop" Analysis --- IOC (Inversion of Control)
Asp.net MVC sample project "Suteki.Shop" analysis --- NVelocity template engine
Asp.net MVC sample project "Suteki.Shop" Analysis --- NHibernate

It’s amazing to find someone giving my little project such love. It makes me feel all warm inside :) It’s not all flattery though; for example, I love his comment in the Installation chapter when talking about this blog: “Unfortunately, the content is pitiful.” :D Harsh, but I can take it.

If you read this Daizhj, drop me a line, I’d love to hear from you.

The .NET Developer’s Guide to Windows Security

A month ago I started a new reading regime where I get up an hour earlier and head off to a café for an hour’s reading before work. It’s a very nice arrangement, since I seem to be in the perfect state of mind for a bit of technical reading first thing in the morning, and an hour is just about the right length of time to absorb stuff before my brain starts to hit overload.

I’ve had this book sitting on my bookshelf unread for a year or two, so it was the perfect candidate to kick off the new regime.

The book is formatted as a list of 75 items such as: “How to Run a Program as Another User”, “What is Role-Based Security”, “How to Use Service Principal Names”. The author, Keith Brown, has an easy to read style that dispatches answers clearly and expertly. Like all the best technical books, he doesn’t just say how things work, but often includes a little history about why they work that way. He’s also quick to outline best practices and share his opinion about the best security choices.

I think most Windows developers, me included, have a cargo-cult view of Windows Security. We pick up various tips and half-truths over the years and get around most security issues by a process of trial and error. All too often we give our applications elevated permissions simply because that’s the only way we can get them to work. A book like this should be essential reading, but unfortunately security is often some way down our list of priorities.

Keith Brown’s first and oft-repeated message is that we should always develop as a standard user. I’ve been doing this at home for some years now; in fact my first ever post on this blog back in 2005 was on this very subject. However, I can’t think of a single assignment I’ve had where my client’s developers were not logged in as Administrator. What little I do know about security has come from my standard-user development experience; it makes you fully aware of what privileges your software is demanding, and I’ve found I’ve been bitten far less by security-related bugs. Working as a standard user is a message that’s drummed home throughout the book and is probably the best advice you could take away from it.

I’ve also gained a real insight into the way logon sessions work and how security tokens attach to them. I had no idea that every Windows resource has an owner and the implications of ownership. The sections on Kerberos, delegation and impersonation were also real eye-openers.

So if you too have misty ideas about how security works, you owe it to yourself to read this book. Sure, it’s not a very sexy subject, but it’ll make you a far better developer.

Monday, October 05, 2009

Brighton ALT.NET Beers! Tuesday 6th October.

The excellent Iain Holder has organised yet another Brighton ALT.NET Beers. It’s on Tuesday the 6th of October, which is tomorrow at the time of posting. The venue is moving from the rather noisy Prince Albert to the somewhat quieter Lord Nelson, a little further down Trafalgar Street.

See you there!

Monday, September 28, 2009

MassTransit at Skills Matter

This Thursday, 1st October, I’m going to be giving a talk about MassTransit, a lean, open source service bus for building loosely-coupled, message-based applications. The first half of the talk will be some general thoughts about message based architectures and why we should be considering them. The second half will be an overview of MassTransit. There will be plenty of code but, be warned, I’ve been going crazy with PowerPoint animations to try to explain some of the concepts.

The talk is hosted by Gojko Adzic and SkillsMatter at the SkillsMatter offices which are located at the junction of St James's Walk and Sekforde Street, just northeast of Clerkenwell Green.

It’s free, but please register on the SkillsMatter site if you wish to attend.

See you there!

Friday, September 18, 2009

Why ‘Buy’ is not always better than ‘Build’

Building software is hard. I’ve been involved in building business systems for 13 years and it’s probably fair to say that I’ve been involved in more failures than successes, and I’m pretty sure that’s not merely a function of my effectiveness (or lack of it) as a software developer. Any business embarking on the development of a core system is taking on significant risk, because the core competencies of building software are probably not the core competencies of the business. After all, the business of building software is a huge problem domain in its own right, with a large and complex body of expertise.

Recognising this, many businesses when faced with a Buy vs Build decision will do almost anything to avoid Build; purchasing Commercial Off-The-Shelf (COTS) systems is always the preferred decision.

If your business need is a common one, then purchasing a COTS system is almost always the right choice. You would be insane to consider building a bespoke word processor for example.

But often it’s the case that your requirements are somewhat common, but with unique elements that only your business would need.

Given the understandable preference for Buy over Build, the temptation is to take a COTS system, or systems, and either fit your business around it or have it customised in some way. So long as the features of the COTS system and your requirements overlap, it’s worth buying it, right?

[Venn diagram: overlap between COTS features and business requirements]

Sure you will have some unsatisfied requirements and there will be some features of the COTS system that you don’t use, but you are still getting benefit. The unsatisfied requirements can be fulfilled by customising or extending the COTS package and the unused features can just be ignored.

You may have a requirement for customer relationship management, for example; why not just buy MSCRM from Microsoft? The things your customers care about might be somewhat unique, but the Microsoft reseller will assure you that MSCRM can be customised to meet your requirements. You might also have case-management style requirements alongside your customer relationship ones; why not buy a case management tool as well and simply integrate it with MSCRM?

But now, as soon as you decide to go down the COTS + customisation route, you are effectively doing software development. Except that now you have suckered your way into doing it rather than approaching it with your eyes open. You will find that extending and customising COTS systems is often a horrible variant of ugly hackery that’s almost impossible to do with good software craftsmanship. You will still have to hire developers, but instead of allowing them to build well architected solutions, you will be forcing them to do a dirty and thankless job. You can guess the calibre of developer that will be attracted to such a position.

The reason why it’s hackery is that it’s almost impossible to extend or customise software without rewriting at least a part of it; and of course, you won’t be able to rewrite your closed-source COTS system. To answer this, most COTS vendors pay some lip service to customisability and extensibility, but the hooks they provide are always somewhat clunky and limited. Anyone who has tried to build a system on top of Sharepoint, or customise MSCRM will know the pain involved. It’s the same for pretty much any enterprise software system.

Each additional requirement over and above the out-of-the-box features will cost significantly more than the equivalent additional requirement of a custom system.

You have a graph somewhat like this:

[Graph: cost per additional requirement for COTS customisation vs bespoke development]

So long as your set of customisations is limited, it’s still worth going down the COTS route, but my experience is that the point where the two lines cross is further to the left than is often appreciated. The chart doesn’t of course factor in the original cost of the COTS system and that can be substantial for many ‘enterprise’ software packages.

A further consideration that’s often ignored is that business requirements often change over time. How are you going to ensure that your COTS system, even if it perfectly matches your requirements now, can continue to match them into the future?

So if you have to do software development, and you will have to unless you can find a COTS system that fully satisfies your requirements, why not do it properly: hire and nurture a great software team; put professional software development practices in place; properly analyse your business; work out exactly what you need from your supporting applications, and then build a system that exactly fits your needs.

An alternative might be to build a close relationship with a bespoke software house, but be aware that their incentives will not be fully aligned with yours and in any case, if your system is substantial, they will probably simply go to the open market and hire the same team of developers that you could have hired yourself.

So consider carefully the costs of a COTS system that doesn’t fully meet your requirements and don’t ignore the benefits that bespoke software built and supported by an in-house team can bring.

Wednesday, September 09, 2009

Slides and Code for last night’s VBug London NHibernate talk

Thanks to everyone who came to last night’s talk on NHibernate, and especially Sam Morrison of VBug for inviting me along and for Anteo Group for hosting the event.

You can download the code and slides from the talk here:
http://static.mikehadlow.com/Mike.NHibernateDemo.zip

The NHibernate homepage is here:
https://www.hibernate.org/343.html

Fluent NHibernate is here:
http://fluentnhibernate.org/

NHibernate Profiler is here:
http://nhprof.com/

NHibernate in Action by Pierre Henri Kuaté, Tobin Harris, Christian Bauer, and Gavin King is essential reading and can be purchased from the publisher, Manning’s, website:
http://www.manning.com/kuate/

Tuesday, September 01, 2009

Mocking Delegates

I never realised you could do this: use Rhino Mocks to mock a delegate:

using System;
using Rhino.Mocks;
using NUnit.Framework;

namespace Mike.MockingLambdas
{
    [TestFixture]
    public class Can_a_lambda_be_mocked
    {
        [Test]
        public void Lets_try()
        {
            const string text = "some text";

            var mockAction = MockRepository.GenerateMock<Action<string>>();
            var runsAnAction = new RunsAnAction(mockAction);
            runsAnAction.RunIt(text);

            mockAction.AssertWasCalled(x => x(text));
        }
    }

    public class RunsAnAction
    {
        private readonly Action<string> action;

        public RunsAnAction(Action<string> action)
        {
            this.action = action;
        }

        public void RunIt(string text)
        {
            action(text);
        }
    }
}
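
Presumably the same trick works when the delegate returns a value. Here’s an untested sketch of my own using the standard Stub/Return syntax:

[Test]
public void Can_a_func_be_stubbed()
{
    // Mock a Func<int, string> and stub a return value for a specific input.
    var mockFunc = MockRepository.GenerateMock<Func<int, string>>();
    mockFunc.Stub(f => f(1)).Return("one");

    Assert.AreEqual("one", mockFunc(1));
    mockFunc.AssertWasCalled(f => f(1));
}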

Wednesday, August 19, 2009

The first rule of exception handling: do not do exception handling.

Back when I was a VB developer, we used to wrap every single function with an exception handler. We would catch the exception and write it to a log. More sophisticated versions even featured a stack trace. Yes, really, we were that advanced :) The reason we did this was simple: VB didn’t have structured exception handling, and if your application threw an unhandled error it simply crashed. There was no default way of knowing where the exception had taken place.

.NET has structured exception handling, but the VB mindset of wrapping every piece of code in a try-catch block, where the catch catches System.Exception, is still common; I see it again and again in enterprise development teams. Usually it includes some logging framework and looks something like this:

try
{
    // do something
}
catch (Exception exception)
{
    Logger.LogException("Something bad just happened", exception);    
}

Catching System.Exception is the worst possible exception handling strategy. What happens when the application continues executing? It’s probably now in an inconsistent state that will cause extremely hard-to-debug problems in some unrelated place or, even worse, corrupt data that will live on in the database for years to come.

If you re-throw from the catch block the exception will get caught again in the calling method and you get a ton of log messages that don’t really help you at all.

It is much better to simply allow any exceptions to bubble up to the top of the stack and leave a clear message and stack trace in a log and, if possible, some indication that there’s been a problem on a UI.

In fact there are only three places where you should handle exceptions: at the process boundary; when you are sure you can improve the experience of the person debugging your application; and when your software can recover gracefully from an expected exception.

You should always catch exceptions at the process boundary and log them. If the process has a UI, you should also inform the user that there has been an unexpected problem and end the current business process. If you allow the customer to struggle on you will most likely end up with either corrupt data, or a further, much harder to debug, problem further down the line.

Improving the debugging experience is another good reason for exception handling. If you know something is likely to go wrong; a configuration error for example, it’s worth catching a typed error (never System.Exception) and adding some context. You could explain clearly that a configuration section is missing or incorrect and give explicit instructions on how to fix it. Be very careful though, it is very easy to end up sending someone on a wild goose chase if you inadvertently catch an exception you were not expecting.
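
Here’s a sketch of what I mean (my own example; the file and the config setting it mentions are hypothetical): catch the narrow exception you expect, add the context, and keep the original as the inner exception so the stack trace survives.

public static string LoadTransform(string path)
{
    try
    {
        // File.ReadAllText throws FileNotFoundException for a missing file.
        return File.ReadAllText(path);
    }
    catch (FileNotFoundException exception)
    {
        // Typed catch: a missing file is the only thing we expect here.
        throw new ApplicationException(
            string.Format(
                "Could not find the transform file '{0}'. " +
                "Check the transformPath setting in the application config.",
                path),
            exception);
    }
}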

Recovering gracefully from an expected exception is, of course, another good reason for catching an exception. However, you should also remember that exceptions are quite expensive in terms of performance, and it’s always better to check first rather than relying on an exception to report a condition. Avoid using exceptions as conditionals.
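
The classic illustration of that last point (my own example, assuming quantityText holds some user input): check the condition with the Try pattern rather than catching the failure.

// Relies on an exception for ordinary control flow: expensive and noisy.
int quantity;
try
{
    quantity = int.Parse(quantityText);
}
catch (FormatException)
{
    quantity = 0;
}

// Checks first instead; no exception is ever thrown for bad input.
if (!int.TryParse(quantityText, out quantity))
{
    quantity = 0;
}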

So remember: the first rule of exception handling: do not do exception handling. Or, “when in doubt, leave it out” :)

Friday, July 24, 2009

How MassTransit Publish and Subscribe works

[Diagram: MassTransit publish/subscribe overview]

This is a follow-on from my last post, A First Look at MassTransit. Here’s my take on how publish and subscribe works. It’s based on a very brief scan of the MT code, so there could well be misunderstandings and missing details.

The core component of MassTransit is the ServiceBus; it’s the primary API that services use to subscribe to and publish messages. The ServiceBus has an inbound and an outbound pipeline. When publish is called, a message gets sent down the outbound pipeline until a component that cares about that message type dispatches it to an endpoint. Similarly, when a message is received it is passed down the inbound pipeline, giving each component a chance to process it.

Understanding how the input and output pipelines are populated is the key to understanding how MassTransit works. It’s instructive to get a printout of your pipelines by inspecting them with the PipelineViewer. I’ve created a little class to help with this:

 

using System.IO;
using MassTransit.Pipeline;
using MassTransit.Pipeline.Inspectors;

namespace MassTransit.Play.Helpers
{
    public class PipelineWriter : IPipelineWriter
    {
        private readonly TextWriter writer;
        private readonly IServiceBus bus;

        public PipelineWriter(IServiceBus bus, TextWriter writer)
        {
            this.bus = bus;
            this.writer = writer;
        }

        public void Write()
        {
            writer.WriteLine("InboundPipeline:\r\n");
            WritePipeline(bus.InboundPipeline);
            writer.WriteLine("OutboundPipeline:\r\n");
            WritePipeline(bus.OutboundPipeline);
        }

        private void WritePipeline(IPipelineSink<object> pipeline)
        {
            var inspector = new PipelineViewer();
            pipeline.Inspect(inspector);
            writer.WriteLine(inspector.Text);
        }
    }
}

Let’s look at the sequence of events when RuntimeServices.exe, a subscriber service and a publishing service start up.

When RuntimeServices starts up the SubscriptionService creates a list of ‘SubscriptionClients’. Initially this is empty.

[Diagram: RuntimeServices starts up with an empty list of subscription clients]

When our subscriber comes on line, it sends an AddSubscriptionClient message to the subscription service. The subscription service then adds our subscriber to its list of subscription clients.

[Diagram: the subscriber sends AddSubscriptionClient and is added to the subscription client list]

Next our publisher comes on line. It also sends an AddSubscriptionClient message to the subscription service. It too gets added to the subscription clients list.

[Diagram: the publisher sends AddSubscriptionClient and is added to the subscription client list]

When the subscriber subscribes to a particular message type, ServiceBus sends an AddSubscription message to SubscriptionService which in turn scans its list of subscription clients and sends the AddSubscription message to each one.

SubscriptionService also adds the subscription to its list of subscriptions so that when any other services come on line it can update them with the list.

The publisher receives the AddSubscription message that was broadcast to all the subscription clients and adds the subscriber endpoint to its outbound pipeline. Note that the subscriber also receives its own AddSubscription message back and adds itself to its outbound pipeline (not shown in the diagram).

[Diagram: the AddSubscription message is broadcast to every subscription client]

The subscriber also adds a component to its inbound pipeline to listen for messages of the subscribed type. I haven’t shown this in the diagram either.

When the publisher publishes a message, it sends the message down its outbound pipeline until it is intercepted by the subscriber’s endpoint and dispatched to the subscriber’s queue. The subscription service is not involved at this point.

[Diagram: the published message is dispatched straight to the subscriber’s queue]

I hope this is useful if you’re trying to get to grips with MassTransit. Thanks to Dru for clarifying some points for me.

Wednesday, July 22, 2009

A First Look at MassTransit

Get the code for this post here:

http://static.mikehadlow.com/MassTransit.Play.zip

I’ve recently been trying out MassTransit as a possible replacement for our current JBOWS architecture. MassTransit is a “lean service bus implementation for building loosely coupled applications using the .NET framework.” It’s a simple service bus based around the idea of asynchronous publish and subscribe. It’s written by Dru Sellers and Chris Patterson, both good guys who have been very quick to respond on both twitter and the MassTransit google group.

To start with I wanted to try out the simplest thing possible, a single publisher and a single subscriber. I wanted to be able to publish a message and have the subscriber pick it up.

The first thing to do is get the latest source from the MassTransit google code repository and build it.

The core MassTransit infrastructure is provided by a service called MassTransit.RuntimeServices.exe. This is a Windows service built on Top Shelf (be careful what you click on at work when Googling for this :) ). I plan to blog about Top Shelf in the future, but in short it’s a very nice fluent API for building Windows services. One of the nicest things about it is that you can run the service as a console app during development but easily install it as a Windows service in production.

Before running RuntimeServices you have to provide it with a SQL database. I wanted to use my local SQL Server instance, so I opened up the MassTransit.RuntimeServices.exe.config file, commented out the SQL CE NHibernate configuration and uncommented the SQL Server stuff. I also changed the connection string to point to a test database I’d created. I then ran the SetupSQLServer.sql script (under the PreBuiltServices\MassTransit.RuntimeServices folder) against my database to create the required tables.

So let’s start up RuntimeServices by double clicking the MassTransit.RuntimeServices.exe in the bin directory.

clip_image002

A whole load of debug messages are spat out. We can also see that some new private MSMQ queues have been automatically created:

clip_image004

We can also launch MassTransit.SystemView.exe (also in the bin folder) which gives us a nice GUI view of our services:

clip_image006

I think it shows a list of subscriber queues on the left. If you expand the nodes you can see the types that are subscribed to. I guess the reason that the mt_subscriptions and mt_health_control queues are not shown is that they don’t have any subscriptions associated with them.

Now let’s create the simplest possible subscriber and publisher. First I’ll create a message structure. I want my message class to be shared by my publisher and subscriber, so I’ll create it in its own assembly and then reference that assembly in the publisher and subscriber projects. My message is very simple, just a regular POCO:

namespace MassTransit.Play.Messages
{
    public class NewCustomerMessage
    {
        public string Name { get; set; }
    }
}

Now for the publisher. MassTransit uses the Castle Windsor IoC container by default, along with log4net, so we need to add the following references:

clip_image008

The MassTransit API is configured as a Windsor facility. I’m a big fan of Windsor, so this all makes sense to me. Here’s the Windsor config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <facilities>
    <facility id="masstransit">
      <bus id="main" endpoint="msmq://localhost/mt_mike_publisher">
        <subscriptionService endpoint="msmq://localhost/mt_subscriptions" />
        <managementService heartbeatInterval="3" />
      </bus>
      <transports>
        <transport>MassTransit.Transports.Msmq.MsmqEndpoint, MassTransit.Transports.Msmq</transport>
      </transports>
    </facility>
  </facilities>
</configuration>

As you can see, we reference the ‘masstransit’ facility and configure it with two main nodes, bus and transports. Transports is pretty straightforward: we simply specify the MsmqEndpoint. The bus node specifies an id and an endpoint. As far as I understand it, if your service only publishes, then the queue is never used, but MassTransit throws an exception if you don’t specify it. I’m probably missing something here; any clarification will be warmly received :)

Continuing with the configuration: under bus are two child nodes, subscriptionService and managementService. The subscriptionService endpoint specifies the location of the subscription queue that RuntimeServices uses to keep track of subscriptions. This should be the queue created when RuntimeServices starts up for the first time; on my machine it was mt_subscriptions. I’m not sure exactly what the managementService node specifies, but I think it’s the subsystem that allows RuntimeServices to monitor the health of the service. I’m assuming that heartbeatInterval is the number of seconds between each notification.

Next, let’s code our publisher. I’m going to create a simple console application. In production I would host it as a Top Shelf service, but right now I want to do the simplest thing possible, so I’m going to keep any other infrastructure out of the equation for the time being. Here’s the publisher code:

using System;
using MassTransit.Play.Messages;
using MassTransit.Transports.Msmq;
using MassTransit.WindsorIntegration;

namespace MassTransit.Play.Publisher
{
    public class Program
    {
        static void Main()
        {
            Console.WriteLine("Starting Publisher");

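            // tell MassTransit to create any MSMQ queues that don't already exist (such as mt_mike_publisher)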
            MsmqEndpointConfigurator.Defaults(config =>
            {
                config.CreateMissingQueues = true;
            });

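            // build the Windsor container from windsor.xml; it registers the MassTransit facility and the components the bus needs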
            var container = new DefaultMassTransitContainer("windsor.xml");
            var bus = container.Resolve<IServiceBus>();

            string name;
            while((name = GetName()) != "q")
            {
                var message = new NewCustomerMessage {Name = name};
                bus.Publish(message);
                
                Console.WriteLine("Published NewCustomerMessage with name {0}", message.Name);
            }

            Console.WriteLine("Stopping Publisher");
            container.Release(bus);
            container.Dispose();
        }

        private static string GetName()
        {
            Console.WriteLine("Enter a name to publish (q to quit)");
            return Console.ReadLine();
        }
    }
}

The first statement instructs the MassTransit MsmqEndpointConfigurator to create any missing queues so that we don’t have to manually create the mt_mike_publisher queue. The pattern used here is very common in the MassTransit code, where a static method takes an Action<TConfig> of some configuration class.
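Just to illustrate the shape of that pattern, here’s a minimal sketch with made-up names (QueueConfigurator and QueueConfiguration are not real MassTransit types):

using System;

public class QueueConfiguration
{
    public bool CreateMissingQueues { get; set; }
}

public static class QueueConfigurator
{
    private static readonly QueueConfiguration current = new QueueConfiguration();

    // the static method hands a configuration object to the caller's lambda to mutate
    public static void Defaults(Action<QueueConfiguration> configure)
    {
        configure(current);
    }
}

// usage mirrors the MassTransit call above:
// QueueConfigurator.Defaults(config => { config.CreateMissingQueues = true; });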

The next line of the publisher code creates the DefaultMassTransitContainer. This is a WindsorContainer with the MassTransitFacility registered and all the components needed for MassTransit to run. For us the most important service is the IServiceBus which encapsulates most of the client API. The next line gets the bus from the container.

We then set up a loop getting input from the user, creating a NewCustomerMessage and calling bus.Publish(message). It really is as simple as that.

Let’s look at the subscriber next. The references and Windsor.xml config are almost identical to the publisher’s; the only difference is that the bus endpoint should point to a different MSMQ queue, mt_mike_subscriber in my case (msmq://localhost/mt_mike_subscriber).

In order to subscribe to a message type we first have to create a consumer. The consumer ‘consumes’ the message when it arrives at the bus.

using System;
using MassTransit.Internal;
using MassTransit.Play.Messages;

namespace MassTransit.Play.Subscriber.Consumers
{
    public class NewCustomerMessageConsumer : Consumes<NewCustomerMessage>.All, IBusService
    {
        private IServiceBus bus;
        private UnsubscribeAction unsubscribeAction;

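        // called by the bus when a NewCustomerMessage arrives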
        public void Consume(NewCustomerMessage message)
        {
            Console.WriteLine(string.Format("Received a NewCustomerMessage with Name : '{0}'", message.Name));
        }

        public void Dispose()
        {
            bus.Dispose();
        }

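        // called when the host starts the service; this is where we subscribe to the bus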
        public void Start(IServiceBus bus)
        {
            this.bus = bus;
            unsubscribeAction = bus.Subscribe(this);
        }

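        // remove the subscription when the service stops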
        public void Stop()
        {
            unsubscribeAction();
        }
    }
}

You create a consumer by implementing the Consumes<TMessage>.All interface and, as Ayende says, it’s a very clever, fluent way of specifying both what needs to be consumed and how it should be consumed. The ‘All’ interface has a single method that needs to be implemented, Consume, and we simply write to the console that the message has arrived. Our consumer also implements IBusService, which gives us places to start and stop the service bus and do the actual subscription.
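For what it’s worth, I’d guess the nested interface is shaped roughly like this (just a sketch based on how it’s used above, not the actual MassTransit source):

public class Consumes<TMessage>
{
    public interface All
    {
        void Consume(TMessage message);
    }
}

Implementing Consumes<NewCustomerMessage>.All is then what forces our consumer to provide the Consume(NewCustomerMessage) method shown above.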

Here’s the Main method of the subscription console application:

using System;
using Castle.MicroKernel.Registration;
using MassTransit.Play.Subscriber.Consumers;
using MassTransit.Transports.Msmq;
using MassTransit.WindsorIntegration;

namespace MassTransit.Play.Subscriber
{
    class Program
    {
        static void Main()
        {
            Console.WriteLine("Starting Subscriber, hit return to quit");

            MsmqEndpointConfigurator.Defaults(config =>
                {
                    config.CreateMissingQueues = true;
                });

            var container = new DefaultMassTransitContainer("windsor.xml")
                .Register(
                    Component.For<NewCustomerMessageConsumer>().LifeStyle.Transient
                );

            var bus = container.Resolve<IServiceBus>();
            var consumer = container.Resolve<NewCustomerMessageConsumer>();
            consumer.Start(bus);

            Console.ReadLine();
            Console.WriteLine("Stopping Subscriber");
            consumer.Stop();
            container.Dispose();
        }
    }
}

Once again we specify that we want MassTransit to create our queues automatically and create a DefaultMassTransitContainer. The only addition we have to make for our subscriber is to register our consumer so that the bus can resolve it from the container.

Next we simply grab the bus and our consumer from the container and call Start on the consumer, passing it the bus. A nice little bit of double dispatch :)

Now we can start up our Publisher and Subscriber and send messages between them.

clip_image010

clip_image012

Wow it works! I got a lot of childish pleasure from starting up multiple instances of my publisher and subscriber on multiple machines and watching the messages go back and forth. But then I’m a simple soul.

Looking at the MSMQ snap-in, we can see that the MSMQ queues have been automatically created for mt_mike_publisher and mt_mike_subscriber.

clip_image014

MassTransit System View also shows that the NewCustomerMessage is subscribed to on mt_mike_subscriber. It also shows the current status of our services. You can see that I have been turning them on and off through the morning.

clip_image016

Overall I’m impressed with MassTransit. As with most new open source projects the documentation is non-existent, but I was able to get started by looking at the Starbucks sample and reading Rhys C’s excellent blog posts. Kudos to Chris Patterson (AKA PhatBoyG) and Dru Sellers for putting such a cool project together.