By Chris R. Chapman at December 06, 2007 02:52
Filed Under: alt.net, amuse, tdd, unit testing

In the thread Breaking Unit Tests:

On Nov 27, 2007 8:02 PM, Scott Bellware <sbellware@...> wrote:

On Nov 27, 2007, at 8:27 PM, Steven Mitcham wrote:

> To me, it would seem that if the tests that are breaking have
> nothing to do with the change, that would be an indication of
> either over specified tests, or badly coupled code.
+1

We're trying to create test harnesses, not test shackles.


This should be the official motto for unit testing...

 

By Chris R. Chapman at November 16, 2007 07:54
Filed Under: refactoring, rnchessboardcontrol, tdd, unit testing

Greetings and welcome to the fourth installment where you, gentle blog reader and better practices enthusiast, have the opportunity to pair-program with me as I tackle some old and hoary code I wrote four years ago in an attempt to buff it into respectability.  While the app in question is a .NET Windows Forms component that implements a chess board, this series is focusing on the rules engine behind it.

Today, I’ll be demonstrating how I crafted unit tests to exercise the rules engine with replays of classic (and not so classic) chess games – specifically, I want to ensure that I have a baseline of tests that demonstrate legal moves, captures, checks and mates.

Housekeeping

In my last post, I mentioned that the goal for this article would be to have the source code available for download via an online repository – as of Monday, I set up a Google Code Subversion repository for this purpose and will be using it going forward, so you can follow along with the revisions to the code as they’re made.  I also mentioned that we’d be getting into some beefier tests – while this is the case, it’s not as beefy as I’d have liked, as I ran into some unexpected difficulties.  More on that later.

Unit Test Goal: Replaying Classic Chess Games

In the last installment, I wanted to craft some unit tests that I could use to replay classic chess games from history, like Capablanca v. Alekhine 1927 or Stahlberg v. Petrov 1938 – the purpose was manifold:

  • These games are usually 25+ moves in length, and because of the skill of the players involved, incorporate a wide array of moves and techniques;
  • Because the games are historic, I know that each move is legal and the outcome for each move;
  • Pushing the moves into the engine would validate the baseline behaviour for the MakeMove() method;
  • Most importantly, the returned PlayerMoveResult struct provides metadata to validate the outcome of each half move (per side), telling us a great deal about how the internals of the engine are functioning.

With this objective in mind, I set about thinking of how to capture or generate test data for classic games that I could easily consume inside a unit test.  My first instincts were to set up a delimited file with the algebraic coordinates for each move, along with validation flags for the move type and outcome – ooh!  Maybe use XML, have some XPath queries to fetch the half moves and outcomes…

Stop.  Stop it right there!  As cool as this would be, it was also introducing complexity where I didn’t need it since I’d now have to be accountable for ensuring that the code to process the XML was tested and valid.  Instead, I decided to do the simplest thing that works and began roughing-in a simple string array with delimited fields that would identify each half move along with corresponding expected outcomes for move type and result:

    class TestGames
    {
        // Capablanca vs. Mieses 1913
        public static string[] Capablanca_Mate_Mieses_1913 = new string[]
            {
                "D2-D4//nc/G8-F6//nc",
                "G1-F3//nc/C7-C5//nc",
                "D4-D5//nc/D7-D6//nc",
                "C2-C4//nc/G7-G6//nc",
                "B1-C3//nc/F8-G7//nc",
                "E2-E4//nc/E8-G8//O-O",
                "F1-E2//nc/E7-E6//nc",
                "E1-G1//O-O/E6-D5//x",
                "E4-D5//x/F6-E8//nc",
                "F1-E1//nc/C8-G4//nc",
                "F3-G5//nc/G7-C3//x",

My intent here is to have a simple string delimited with forward slashes (“/”) to indicate each half move, its result and its type, starting with white:

    /// The strings for each move are structured in the following manner:
    /// "{white from}-{white to}/{white result}/{white move type}/
    /// {black from}-{black to}/{black result}/{black move type}"

 

Next, I wanted to validate the result for each move and indicate if the move resulted in material capture, castling, en passant capturing, pawn promotion, etc.  I decided to borrow the codes and symbols for move results and types that are used in Standard Algebraic Notation for games to keep things intuitive and simple:

    /// {result} has the following expected values:
    /// "": Legal move, no immediate consequence
    /// "!": Illegal move
    /// "+": Move has placed opponent in check
    /// "++": Move has placed opponent in check mate
    ///
    /// {move type} has the following expected values:
    /// "x": Capture
    /// "nc": Non-capturing move
    /// "O-O": King-side castle
    /// "O-O-O": Queen-side castle
    /// "xep": En-passant capture
    /// "!": Illegal move
    /// "^": Pawn promotion

 

So far, so good.  However, I soon realized a new problem with the arrays – they’re laborious to construct and prone to errors when entered by hand, especially because I was transcribing the moves from a book that used the old-style notation.  I solved this by creating a quick WinForms app that used the existing chessboard control to capture moves and transcribe them into a format I could easily cut & paste into my array definitions:

[Screenshot: Quick_board_app]

This might seem like cheating:  After all, if the purpose of my tests is to validate the engine’s behaviour, how can I be certain that what I’m capturing here is valid input?  Simply put, I’m using my own knowledge of the game to verify that the moves and results correspond to the outcomes for the game, ie. captures, checks and mates – in other words, I’m following the first principle in Hunt & Thomas’ maxim about unit tests:  Are the results right?

For the curious, I’ve added the above app to the solution under the RNChessBoardControl.DemoApp project, which you’ll see when you pull down revision 11 or later of the code.  I won’t delve into it for now so as to keep things brief, but you can easily see by viewing the code that it’s quite lightweight and didn’t take much effort to build out.  In this situation, crafting a quick tool that will save time later on is worth the investment, especially to get test input data that we can be confident about.

Writing the Playback Tests

Using the tool, I captured three games into static string arrays, which I placed into a separate class, TestGames, kept in the same namespace as my test fixture for easy reference.  At a high level, my plan for the playback test was simple:

  1. Iterate over the array for a sample game;
  2. Split each move string into constituent parts for the moves and results for each side;
  3. Push each move into the engine;
  4. Use Assertions to validate the move result and type.

My first cut at this method had everything happening in a single method so I could capture the essence of what I wanted to do:

[Screenshot – Rev 6: makemove_fullgametomate_legal()]
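In case the screenshot doesn’t come through, here’s a rough sketch of that first cut.  The engine surface (MakeMove(), PlayerMoveStatus) comes from these posts, but the _engine member name, the Result/Type fields on the returned struct and the parsing via Split() (the real code ended up using Regex) are my assumptions:

    [Test]
    public void MakeMove_FullGameToMate_Legal()
    {
        foreach (string move in TestGames.Capablanca_Mate_Mieses_1913)
        {
            // Per the comment block above, splitting on "/" yields
            // indices 0-2 for white (from-to, result, type), 3-5 for black.
            string[] moveParts = move.Split('/');

            // White's half move, eg. "D2-D4" -> "D2", "D4"
            string[] whiteSquares = moveParts[0].Split('-');
            _engine.MakeMove(whiteSquares[0], whiteSquares[1]);
            Assert.AreEqual(moveParts[1],
                GetMoveResultStringAsSAN(_engine.PlayerMoveStatus.Result),
                "Unexpected result for white move " + moveParts[0]);
            Assert.AreEqual(moveParts[2],
                GetMoveTypeStringAsSAN(_engine.PlayerMoveStatus.Type),
                "Unexpected move type for white move " + moveParts[0]);

            // Black's half move follows the same pattern
            string[] blackSquares = moveParts[3].Split('-');
            _engine.MakeMove(blackSquares[0], blackSquares[1]);
            Assert.AreEqual(moveParts[4],
                GetMoveResultStringAsSAN(_engine.PlayerMoveStatus.Result),
                "Unexpected result for black move " + moveParts[3]);
            Assert.AreEqual(moveParts[5],
                GetMoveTypeStringAsSAN(_engine.PlayerMoveStatus.Type),
                "Unexpected move type for black move " + moveParts[3]);
        }
    }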

Definitely not pretty, but I wanted to capture the logic and validate the behaviours first.  There was a little trial and error in parsing the moves, which I eventually consigned to some Regex pattern matching.  Also note the GetMoveResultStringAsSAN() and GetMoveTypeStringAsSAN() methods, which convert the PlayerMoveResult and MoveType enumerations returned for each move into strings corresponding to the SAN codes I’m using in the arrays:

[Screenshot – Rev 6: GetSAN_helper_methods]
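The helpers are essentially switch statements mapping the engine’s enumerations onto the SAN-style codes defined earlier – here’s a sketch, with the enum member names being my best guesses:

    // Map the engine's move result onto the SAN-style result codes.
    private string GetMoveResultStringAsSAN(MoveResult result)
    {
        switch (result)
        {
            case MoveResult.Legal:     return "";
            case MoveResult.Illegal:   return "!";
            case MoveResult.Check:     return "+";
            case MoveResult.CheckMate: return "++";
            default:                   return "?";
        }
    }

    // Map the engine's move type onto the SAN-style type codes.
    private string GetMoveTypeStringAsSAN(MoveType moveType)
    {
        switch (moveType)
        {
            case MoveType.Capture:          return "x";
            case MoveType.NonCapture:       return "nc";
            case MoveType.KingSideCastle:   return "O-O";
            case MoveType.QueenSideCastle:  return "O-O-O";
            case MoveType.EnPassantCapture: return "xep";
            case MoveType.PawnPromotion:    return "^";
            default:                        return "?";
        }
    }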

While this test ran and corroborated the results, it needed to be refactored because it wasn’t very well focused – in other words it was exhibiting some code smells.

Step 1:  Apply the Extract Method Refactoring

In order to address my smelly code, it seemed a good idea to split the array iteration/move entry code out of the test method.  Easy-peasy:  Using the Refactor! Pro Visual Studio add-in, all I needed to do was highlight the code within the method definition and then click the ellipsis (…) that appeared below to trigger the Extract Method refactoring:

[Screenshot: Extract_method_makemove_fullgametomate]

Clicking Extract Method, I’m prompted to specify where I want the new method to go and what it would be called:

[Screenshot: Extract_method_makemove_fullgametomate_2]

I decided to move the new method below the test method stub, so I hit the down arrow and then Enter – presto, our method has been riven in twain:

[Screenshot: Extract_method_makemove_fullgametomate_3]

In order to make this change more meaningful, I decided to rename the method and add an argument for passing in a string array for the test game:

[Screenshot: Extract_method_makemove_fullgametomate_4]
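In text form, the result is roughly the following – the test now just names the game, while the iteration logic lives in the extracted method (body elided):

    [Test]
    public void MakeMove_FullGameToMate_Legal()
    {
        PlayTestGame(TestGames.Capablanca_Mate_Mieses_1913);
    }

    // Extracted via Refactor! Pro, then renamed and given a string[]
    // argument so any captured game can be replayed.
    private void PlayTestGame(string[] testGame)
    {
        // ...the iteration and assertion logic from the first cut...
    }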

I then rebuilt the tests and ran them to verify everything was in working order before committing the changes to the repository under revision 8, and then added additional test methods for the other two games in revisions 9 and 10.

Step 2:  DRY Refactorings

In The Pragmatic Programmer, Hunt and Thomas introduce a concept that serves as the root of many refactorings: the DRY principle, or “Don’t Repeat Yourself”.  It means we should avoid duplicating code wherever possible and collapse the duplicated segments we find into separate methods.  If we look in the PlayTestGame method, there’s definitely some duplication happening:

[Screenshot: Dry_playtestgame_white]

The code segment above is practically identical to the block that validates black’s move – the only differences are the string elements from the moveParts array that identify the move and its expected result and type.  Refactor! Pro suggested that I extract this block into a new method with two arguments, moveIndex and moveParts[]:

[Screenshot: Dry_playtestgame_white_suggestion]

Close, but no banana.  The suggested method, GetPmr(), while a step in the right direction, is locked to the ordinal positions of white’s move in the split array (ie, indices 0, 1, 2) – I’d end up creating a duplicate method for black’s move!  It’s also rather poorly named for my purposes.  I decided to let the tool stub out the method for me, but renamed it and changed the signature slightly:

[Screenshot: Dry_playtestgame_white_1]

I applied some renaming refactorings to make the intent of the method more generic for its new purpose:

[Screenshot: Dry_playtestgame_white_seg2]
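Here’s a sketch of where this ended up – one method serving both sides, with the starting offset into moveParts passed in (the parameter names are my reconstruction):

    // Validates a half move for either side; moveIndex is the starting
    // offset into moveParts - 0 for white, 3 for black - which is what
    // frees us from the white-only ordinals the tool first suggested.
    private void ExecuteTestGameMove(string[] moveParts, int moveIndex)
    {
        string[] squares = moveParts[moveIndex].Split('-');
        _engine.MakeMove(squares[0], squares[1]);

        Assert.AreEqual(moveParts[moveIndex + 1],
            GetMoveResultStringAsSAN(_engine.PlayerMoveStatus.Result),
            "Unexpected result for move " + moveParts[moveIndex]);
        Assert.AreEqual(moveParts[moveIndex + 2],
            GetMoveTypeStringAsSAN(_engine.PlayerMoveStatus.Type),
            "Unexpected move type for move " + moveParts[moveIndex]);
    }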

All I had to do now was revise the calling method’s arguments, build and test.  Note that at this point, I did not apply the refactoring to the black move block below – I wanted to confirm that the refactored code I had introduced was in fact working:

[Screenshot: Dry_playtestgame_white_seg3]
[Screenshot: Dry_playtestgame_white_success]

Awesome – all that was needed now was to replace the duplicate code for black with a similar method call to ExecuteTestGameMove() and pass in the elements from moveParts[]:

[Screenshot: Dry_playtestgame_black_refactor]

As we can see, even here in our test fixture we’ve gained some advantages from refactoring our code – our tests will be easier to read and maintain, and we’ve consolidated four Asserts into two within an isolated method.

Rounding out this session, I finished up with additional test methods to run the two other sample games I added to the TestGames class.

Next Steps…

For the next installment in this series, I’ll add some additional test methods and finally get back to tackling the refactorings I’ve wanted to introduce to the DoMove() method.  Check the code repository for updates to the codebase as I won’t be writing about them all as we go forward – in fact, there are additional tests for some edge cases that I’ll be adding over the next day or so.  Cheers!

By Chris R. Chapman at November 15, 2007 11:01
Filed Under: .net, asp.net, better practices, design patterns, unit testing
Update:  'Seems I'm not the only one who's excited about the TDD potential that ASP.NET MVC opens up.  Phil Haack is already all over this one with his post Writing Testable Code Is About Managing Complexity.

I’m trying to curb my enthusiasm here, but I’m excited after reading this first part of ScottGu’s in-depth coverage on his team’s upcoming ASP.NET MVC Framework that we can expect to see in VS2008.

What’s got me really keyed-up is the potential that the MVC model gives us for unit testing web applications.  I couldn’t care two figs about the latest syntactic sugar that the product teams think we all want – it’s useless if I can’t do TDD (test-driven development) and CI (continuous integration) against it.  And ASP.NET web apps have been public enemy #1 for most attempts at unit testing – witness: SharePoint.

Here’s how Scott sees TDD happening under MVC (emphasis mine):

…part of what makes an MVC approach attractive is that we can unit test the Controller and Model logic of applications completely independently of the View/Html generation logic.  As you'll see below we can even unit test these before we create our Views.

The ASP.NET MVC framework has been designed specifically to enable easy unit testing.  All core APIs and contracts within the framework are interfaces, and extensibility points are provided to enable easy injection and customization of objects (including the ability to use IOC containers like Windsor, StructureMap, Spring.NET, and ObjectBuilder).  Developers will be able to use built-in mock classes, or use any .NET type-mocking framework to simulate their own test versions of MVC related objects.

[Screenshot: Mvc_testproject]

[Screenshot: Pct_test]
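To make the testability claim concrete, here’s a purely hypothetical sketch of the kind of controller-level test the MVC separation enables – none of these type names come from the actual preview bits; the point is simply that controller logic can be driven by plain NUnit and a hand-rolled fake, with no web server in sight:

    using NUnit.Framework;

    // Hypothetical types for illustration only - not the preview's API.
    public interface IProductRepository
    {
        string GetProductName(int id);
    }

    public class ProductController
    {
        private readonly IProductRepository _repository;

        // Dependencies arrive via the constructor, so tests can inject fakes.
        public ProductController(IProductRepository repository)
        {
            _repository = repository;
        }

        public string ShowProduct(int id)
        {
            return _repository.GetProductName(id);
        }
    }

    [TestFixture]
    public class ProductControllerTests
    {
        private class FakeRepository : IProductRepository
        {
            public string GetProductName(int id) { return "Widget"; }
        }

        [Test]
        public void ShowProduct_KnownId_ReturnsName()
        {
            // No HttpContext, no web server - just controller logic.
            ProductController controller = new ProductController(new FakeRepository());
            Assert.AreEqual("Widget", controller.ShowProduct(42));
        }
    }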

The striking thing I see out of all of this is that Scott is envisioning an entirely new way of advancing the development paradigm:

  1. ASP.NET MVC incorporates a best practice design pattern that’s long been known outside of our camp and one we’ve really wanted;
  2. It’s taking some positive cues from our Ruby on Rails brethren (eg. Models, Views and Controllers folders, ease of testing);
  3. It is an alternative to, not a replacement for, standard ASP.NET WebForms – you can quite happily live in a coding cave if you wish;
  4. You may use integrated MSFT technologies for testing and dependency injection, or use what you like, eg. NUnit, xUnit, Windsor, etc.

I like this.  I like this a lot.  I’m also hoping to hell that the higher-ups don’t foul this up for the rest of us – ScottGu’s on the right wavelength here and I’m really impressed he’s advancing so far with this.

By Chris R. Chapman at November 07, 2007 00:51
Filed Under: .net, refactoring, rnchessboardcontrol, tdd, unit testing

Welcome back for the third installment in my series (see Part 1, Part 2) wherein I chronicle my efforts refactoring some code I wrote over four years ago to implement a chess rules engine and WinForms control using C# under .NET v1.1.  I’m hoping to demonstrate over the next few posts a few tips, techniques and tricks to improve established code with refactorings, unit tests and application of best practices that you might find useful in your own endeavours.

Last week I mentioned two tools that I’ll be using from this point forward to support constructing and running unit tests, so we can backstop our new code against destructive changes that would otherwise bust the build or introduce bugs we didn’t intend:  NUnit and TestDriven.NET.  For the impatient out there, we will be getting into refactoring soon, but as we’ll see below, I ran into a little issue as a result of setting up my tests that necessitated some backtracking.

Briefly, here’s a quick overview of each tool:

[Image: NUnit logo]

In spite of rumours of its demise, NUnit is very much the de facto, if not the de jure, free (as in beer) unit test suite for the .NET Framework.  Yes, the Visual Studio 2005 Team Editions support their own unit test framework, which looks a lot like NUnit with differently-named method and class decorators, but I prefer the original.  NUnit is modeled after JUnit, the Java unit test suite, which in turn was based on seedwork by Kent Beck in Smalltalk – it has, therefore, a pretty solid pedigree.

NUnit provides you with a framework for writing regression test assertions to validate code behaviours along with both a console and GUI test runner harness.  If you haven’t done any Test Driven Development (TDD) before, this will be a pleasant experience as you won’t have to roll-your-own test harnesses to see if your code works – you’ll just run the test assemblies and get immediate feedback on whether they passed or failed.

Another upshot to using a framework like NUnit is the option of later “hooking” your tests into a continuous integration suite like CruiseControl.NET, which can tie your version control, builds and tests together so they run automatically on code check-ins or other triggering events.

I’ve installed the latest stable version of NUnit for this exercise: 2.4.3 for .NET 2.0.

[Image: TestDriven.NET logo]

While NUnit is the cat’s pajamas for unit testing, and does support some IDE integration, it lacks a certain panache since its test-running goodness lives in an external app.  This can be a throw-the-hands-in-the-air point for some folks who just cannot live without doing everything in Visual Studio, and who will forever despise writing and running unit tests without some IDE support.  So, to kick things up a notch and keep these folks interested, on-board and insanely productive, I recommend downloading and installing the TestDriven.NET Visual Studio add-in.

I won’t go much further into the add-in except to say that it lets you run your tests simply by right-clicking inside either the test fixture class or a test method – easy peasy.  If you want to know more, check out the developer’s site for excellent overviews and tutorials.  Ok, here’s a quick screen cap, just to whet the appetite:

[Screenshot: Testdriven_net_teaser]

I’m using the latest stable version: 2.8.2130.  It runs in all versions of Visual Studio except the Express Editions, for a whole host of reasons.

Getting ready to test:  Adding a test fixture project

Once NUnit is installed, all of its core libraries are available to all and sundry projects through the GAC, so all we need to do is create a new library for housing our test classes (aka test fixtures) and add a reference to NUnit.Framework to get access to all the unit testing goodness.  There are a number of schools of thought on how tests could/should be organized and named – below is my preference:

[Screenshot: Vs_new_tests_project]

In this way, I create an NUnit test assembly for each assembly or executable I’m testing, clearly identified with the .Tests suffix.  Now, I’ll add references to the RNChessRulesEngine and RNChessBoardCommonTypes projects, since there are dependencies between them; next, I’ll add a reference to the nunit.framework assembly via the .NET tab in the Add Reference window.  Finally, I’ll rename the stubbed-out class ChessRulesEngine.TestFixture.cs, add some namespace references and an NUnit TestFixture attribute to indicate that this class will contain test code:

[Screenshot: Vs_chessrulesengine_testfixture_start]

Now that I’m this far, I’ll do a quick build and commit the new project to source control before proceeding.

Supporting the rules engine with preliminary unit tests 

In the last installment, I decided to start my refactoring within the DoPlayerMove(int32,int32) method in the rules engine, as it not only had a high cyclomatic complexity score but was also an obvious entry point for running subsequent routines.  This brings up some questions:  How do we know if the engine is running correctly right now?  And consequently, what kind of tests should we be writing?

Note:  If you’ve never written a unit test before, don’t panic:  It’s dead easy.  Check out the NUnit Quickstart tutorial for some examples.

At a fundamental level, we want to craft tests that will demonstrate that the basic operations of the engine are sound, ie. that we trap for bad input, process moves correctly and can play out a game to mate.  My first inclination is to code up a test that would play a well-known, classic game like Capablanca vs. Alekhine 1927 – French Exchange.  This way, I know for certain what the outcomes after each move should be, the material captured, number of half moves (white) and full moves (black) that have occurred, check states for either side’s king and checkmate state.

[Image: SAN_diagram]

Algebra rears its ugly head.  Well, sort of.

Sounds like we have a plan, right?  Of course, there’s always a little trip-up:  DoPlayerMove() is a private method that is called by another private method, CommitPlayerMove(), which in turn is called by the publicly-accessible MakeMove().  We have our work cut out for us:  In order to run a sample game through the engine, we need to validate the behaviours of MakeMove(), which takes as input two strings representing the Standard Algebraic Notation (SAN) for the “from” and “to” squares, using the numbers 1–8 and the letters A–H to represent the ranks and files (rows and columns of squares), respectively.  Thus, if we want to make a basic Queen’s pawn opening, we’d move from “D2” to “D4”.

Let’s build out a test to verify that the input “D2” and “D4” results in a valid move.  First, we need to add a member object for the rules engine so we can test it, along with a special NUnit method, TFSetUp(), to instantiate the object when the battery of tests is first run:

[Screenshot: Vs_testfixture_setup]
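In text form, the fixture at this point looks roughly like this (the _engine member name is mine):

    using NUnit.Framework;

    [TestFixture]
    public class ChessRulesEngineTestFixture
    {
        private ChessRulesEngine _engine;

        [TestFixtureSetUp]
        public void TFSetUp()
        {
            // Runs exactly once, before any test in this fixture.
            _engine = new ChessRulesEngine();
        }
    }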

Note the [TestFixtureSetUp] attribute that tells the NUnit framework to run this method exactly once for every run of our test fixture class.  Next, let’s add a method to exercise MakeMove() with the SAN I mentioned above:

[Screenshot: Vs_testfixture_test_d2d4_fails]
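Sketching what the screenshot shows – note the deliberately wrong expected value (the enum and property names are as best I can reconstruct them from these posts):

    [Test]
    public void MakeMove_OpenD2D4_Legal()
    {
        _engine.MakeMove("D2", "D4");

        // Deliberately asserting ILLEGAL so the test fails first - see below.
        Assert.AreEqual(MoveResult.ILLEGAL, _engine.PlayerMoveStatus.Result,
            "D2-D4 should be a legal opening move for white.");
    }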

A few things to note here: 

  1. The [Test] attribute, which indicates that this method is a unit test and should be executed by the NUnit framework;
  2. The test method name follows a convention to make it easy to identify (hat tip to Roy Osherove) the method, test state and expected outcome for the test. 
  3. I’m using a classic NUnit assertion, Assert.AreEqual(), which compares an expected value (MoveResult.Legal enum) against the actual value returned from the PlayerMoveStatus property, which is a struct defined in RNChessDataTypes.cs.  I’ve also added a descriptive message that will be returned when the test fails so that we can get meaningful feedback.

Observant keener types will notice that in my Assertion, I’m expecting MoveResult.ILLEGAL.  This is because I want the test to fail first – this way I can be assured that the method is running correctly.  Right-click inside the test method and select Run Test(s) from the context menu – you’ll see the following in the Output pane:

[Screenshot: Testdriven_d2d4_fail_output]

Blammo!  We have our first validation point that the engine is working correctly – it’s taking input for a valid move and returning a confirmation.  Moving on, we adjust the method to check for PlayerMoveResult.LEGAL and then we get a pass:

[Screenshot: Testdriven_d2d4_pass_output]

Let’s add another test to check out an illegal opening move – say the King’s pawn, three squares forward.  We’ll follow the same pattern as above:

[Screenshot: Testdriven_e2e5_fail_output]

And the same drill as before:  Run the test, watch it fail, then adjust to assert on MoveResult.ILLEGAL; run the test and it should pass easily.

Note that I’ve re-instantiated ChessRulesEngine on line 41 – I wanted to revert the object to its default state so that I could properly test the opening move.  But why not use the object’s ResetBoard() or UndoLastMove() methods to do the same thing?  Because we want to make sure that our tests can be run in isolation from each other and test only the operations we’re controlling.  By calling ResetBoard() or UndoLastMove() inside a test method, we’re assuming that they work properly – until we know otherwise, we need to do like Fox Mulder and trust no-one!

So, I need to refactor my test code.  I want to reset the state of the ChessRulesEngine object before each test without having to new one up.  We can do this via a method decorated with the SetUp attribute: 

[Screenshot: Vs_test_setup]
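A sketch of the change – re-instantiating the engine in a per-test setup method instead of inside each test (again, names reconstructed):

    [SetUp]
    public void SetUp()
    {
        // Runs before *each* test, giving every test a pristine board
        // without having to trust ResetBoard() or UndoLastMove().
        _engine = new ChessRulesEngine();
    }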

We can take out line 41 in the MakeMove_OpenE2E5_Illegal() test method, build and run the tests for sanity, and revel in our Homer Simpson-style productivity gains.  Don’t forget, we’re starting off easy here – this will pay dividends later on.

Defining What to Test

So far, it may seem like we’re being somewhat arbitrary in our tests – while testing this input is good, and we do have a goal of writing a test to run through a sample game, we should have a few “rigor rules” to guide our efforts, to make sure we’re writing valuable tests and not just ad hoc queries to satisfy curiosity.  This is where I like to refer to advice from Andy Hunt and Dave Thomas, the original Pragmatic Programmers.

In their book, Pragmatic Unit Testing in C# with NUnit, 2nd Ed., Hunt and Thomas wrap up their experiences for writing top-notch unit tests with three acronyms:

  • Good tests are A-TRIP: Automatic, Thorough, Repeatable, Independent and Professional
  • Write tests using your RIGHT-BICEP
  • Ensure that boundary conditions are CORRECT

I won’t rehash the acronyms here (you can check out the link above); suffice to say that we need to write tests that focus on one thing at a time, can be run independently of one another (ie, they don’t require prior tests to be run first) and exercise the code’s extremities, ie. anything that might break and anything that does break.

Let’s Try Breaking MakeMove()

To round out this post, let’s try writing a test to push in bad input to the MakeMove() method to see what happens.  I have no idea what to expect, so I decide to try this:

[Screenshot: Testdriven_a9a10_fail_1]

Running the test, I got the following results:

[Screenshot: Testdriven_a9a10_fail_1_output]

A-ha!  Now we have grounds for a new assertion – the engine does appear to validate input, throwing an ArgumentException if things are out of whack.  We can test for expected exceptions by adding the ExpectedException attribute to our test definition:

[Screenshot: Testdriven_a9a10_pass_1]
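In text form, the passing test looks something like this (the method name is my reconstruction):

    [Test]
    [ExpectedException(typeof(ArgumentException))]
    public void MakeMove_OpenA9A10_ThrowsArgumentException()
    {
        // "A9" and "A10" are off the board - the engine should throw.
        _engine.MakeMove("A9", "A10");
    }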

I’ve dispensed with writing this test to fail-first with a different exception, since my initial run provided me with the expected exception.  An important thing to note here is that I’m not writing the tests to pass, but using my knowledge of the system and code as I go to validate assumptions – which can change later on, depending on refactorings.

Next:  More tests, web-accessible source

For the next installment, I’ll be getting into meatier tests to validate inputs using Hunt & Thomas’ unit test guidelines, and I’m also going to look into setting up web access to the source tree to provide the latest versions of the code via Subversion as we go along.  As part of this effort, I’ll be writing some tests that I’ll share to validate the moves in a historical game.  Until then, as always, I welcome input on the posts so far and any constructive criticisms.  Cheers!

By Chris R. Chapman at March 13, 2007 23:13
Filed Under: pex, tdd, unit testing
Earlier this week I learned of a new Microsoft Research project called “Pex” (short for Program EXploration) that posits an interesting approach to crafting unit tests - more specifically, to automating the coverage of the unit tests you write.  For the uninitiated, “test coverage” is a moderately controversial topic in the Test-Driven Development world, where some argue that unit tests should aim for 100% code coverage in order to be valuable as a regression validation tool.
 
At first, I was skeptical about Pex - I mean, after the debacle with VSTS back in '05, where MSDN docs actually suggested auto-generating unit test stubs, thereby negating the whole point of changing developer habits to write tests first, I thought Pex was another tilt at the windmill, judging from its description:
 
Pex is an intelligent assistant to the programmer.  By automatically generating unit tests, it alllows to [sic] find bugs early.  In addition, it suggests to the programmer how to fix the bugs.
 

As it turns out, this is a bit misleading;  what Pex is actually all about is a new style of unit test composition:  Parameterized unit tests.  This is interesting!

So, what is a parameterized unit test?
Take your run-of-the-mill unit test (this is from the site):
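The screenshot didn’t survive, but the site’s example was along these lines (my reconstruction - treat the details as illustrative):

    // A conventional unit test with baked-in values.
    [TestMethod]
    public void HashtableTest()
    {
        Hashtable ht = new Hashtable();
        ht.Add("key", "value");
        Assert.IsTrue(ht.ContainsKey("key"));
    }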

Nothing special here - just validating a hashtable.  Trouble is, we're baking in some arbitrary values to validate our assertion.  What if we need to check several different sets of values for a more complex object?  We either: a) write more tests for each case, b) invoke a bad practice and lump a lot of representative cases into a single test, or c) write a loop inside the test to go through several sets of values.  In other words: tedium, which needs to be maintained as the system grows.

Now, take a look at the same unit test written as a Pex parameterized test:
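Again reconstructed rather than screenshotted - the baked-in values become parameters, with an attribute ([PexMethod], per the Pex docs of the time) marking the test for automatic input generation:

    // The same test, parameterized: Pex supplies the inputs, subject to
    // the stated assumptions.
    [PexMethod]
    public void HashtableTest(Hashtable ht, object key, object value)
    {
        PexAssume.IsNotNull(ht);
        PexAssume.IsNotNull(key);

        ht.Add(key, value);
        Assert.IsTrue(ht.ContainsKey(key));
    }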

Already, we see the first significant difference:  The test has parameters!  These arguments are fed into the test, suggesting by design that this unit test will be more flexible than its predecessor.  Pex doesn't stop there, though:  It is capable of generating the inputs for the test.  How?

Again, some interesting things are going on here:  Pex actually monitors the execution of the test, starting from random inputs based on the arguments.  It then uses a constraint solver to ensure that each successive set of inputs exercises a different code path within the test and the objects under test.  The result is significantly improved code coverage.

This is a really radical approach to computer-assisted unit testing - Visual Studio will in effect become an intelligent aide to developers who will still be on the hook for writing tests, but relieved from the tedium of providing supporting code to ensure higher code coverage.

Pex is still under development, so we probably won't see this until after Orcas is released - too bad!  In the meantime, check out the site and have a look at the screencast which shows how this tool works within the VS2005 IDE - it's definitely raised my eyebrows.

About Me

I am a Toronto-based software consultant specializing in SharePoint, .NET technologies and agile/iterative/lean software project management practices.

I am also a former Microsoft Consulting Services (MCS) Consultant with experience providing enterprise customers with subject matter expertise for planning and deploying SharePoint as well as .NET application development best practices.  I am MCAD certified (2006) and earned my Professional Scrum Master I certification in late September 2010, having previously earned my Certified Scrum Master certification in 2006. (What's the difference?)