By Chris R. Chapman at November 29, 2007 11:37
Filed Under: amuse, hacks
Update:  See Part 2 of my investigation.

Like many techno-geeks, one of my favourite kitchen appliances is the coffee machine.  Mine happens to be a Braun Tassimo T1200, which is far and away the coolest coffee machine because it is actually a programmable beverage dispensing system.  With it I can make anything from a run-of-the-mill cup of Maxwell House to espresso, latte, cappuccino, hot chocolate and everything in between.  And with almost zero cleanup.

What makes this possible are the pre-packaged, sealed pods (called T-DISCs) for each drink type, each of which has a barcode printed on its lower surface.  Once a pod is placed in the machine, a scanner reads its barcode, which then “programs” the Tassimo to prepare the bevvy with the precise amount of water at an exact temperature for a specific time.  The result is amazingly consistent quality from cup to cup!


Pictured above are three Tassimo coffee T-DISCs that are available here in Canada (Starbucks is coming soon!) and a plastic cleaning disc that’s used for descaling, etc.  As you can see, each features a different barcode to “program” the machine accordingly.  Of course, being a geek I just had to know how these codes worked to make my daily cup of bliss, so I set about to reverse-engineer/hack my Tassimo.

Starting the Hack – Reading the T-DISC Barcodes

I began toying with how the Tassimo brews coffee by examining the effect of swapping the barcodes – I used my color scanner/printer to make a replica of the Cafe Crema barcode and put it over top of the code for the Colombian coffee pod to see if it would give a similar frothy head.  No dice, but I was still curious how the codes configured the machine’s settings for each brew.  So, my first plan of attack was to decipher the barcodes to see if I could observe any patterns.

Step 1:  Scanning the T-DISC barcodes with a “declawed” :CueCat

In order to read the codes, I needed a barcode reader – and I had just the thing:  an old :CueCat.  The :CueCat was originally intended as a web-browsing aid, allowing users to “scan” barcodes that would navigate their browser to a specific site.  I know: I cannot imagine why the idea flopped.  If you want your own, I think they can still be had on eBay.

The :CueCat plugs into your USB port and pushes scanned data into your machine in much the same way as a keyboard.  I tested mine and discovered the output was encrypted – this required a hack to “declaw” the cat.

Once the cat was declawed, I was quickly able to decipher the codes from five pods and the cleaning disc using just the :CueCat and Notepad2.  I scanned each barcode several times to ensure that I was getting an accurate and consistent reading.  (NB: I added labels to each code for clarity.)


Reviewing this data, I discovered that the T-DISCs are encoded with six-digit “programs”, which should correlate to the Tassimo’s brewing parameters of water, temperature and time.  Additionally, some interesting features of the codes were revealed:

  • Both the Latte and Cappuccino T-DISCs share the same first two digits: “63”;  given that they are almost identical, this provides some insight into the purpose of the first tuple;
  • The cleaning disc’s code begins with a unique tuple, “07” – this could indicate a special code for its purpose.
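To make the tuple structure concrete, here is a minimal Python sketch (the original analysis was done by hand) that splits a scanned code into its three two-digit tuples.  Note that the mapping of tuples to brewing parameters is still only a hypothesis at this point:

```python
def parse_tdisc_code(code):
    """Split a six-digit T-DISC "program" into its three two-digit tuples.

    Which tuple controls water, temperature or time is the hypothesis
    under investigation here, not an established fact.
    """
    if len(code) != 6 or not code.isdigit():
        raise ValueError("expected a six-digit numeric code")
    return code[0:2], code[2:4], code[4:6]

# The Nabob Colombian coffee code splits into three tuples:
print(parse_tdisc_code("642262"))   # ('64', '22', '62')
```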

Step 2:  Determining the T-DISC barcode symbology

Having decoded the data behind the T-DISC barcodes, I now needed to determine the symbology that was used to encode them.  Knowing this, I could then use a barcode generator to create my own codes for manipulating the machine.

After a bit of Googling, I came across a site I’d seen before for Wasp Barcode Technologies – they make all manner of readers, scanners and software related to barcoding.  They also have a web-based barcode generator:


Using trial and error, I applied each of the generator’s thirteen symbologies to the code for the Nabob Colombian coffee (642262) until I found one that matched the source barcode:  Interleaved 2 of 5.  This is a fairly common standard that’s applied to literally everything from soup to nuts.

I cross-checked the T-DISC codes with the symbology FAQ and determined that the format does not include a checksum field, which strengthened my earlier observation that the code’s three tuples are used to program the Tassimo’s brewing parameters directly.
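For the curious, Interleaved 2 of 5 is simple enough to sketch from its public spec: digits are encoded in pairs, with the first digit of each pair expressed as five bars and the second as the five spaces interleaved between them, and exactly two of each digit’s five elements are wide.  Here’s a rough Python model – not the Wasp generator’s code, just an illustration of the symbology:

```python
# Each ITF digit is five elements: exactly two wide (W) and three narrow (N).
DIGIT_PATTERNS = {
    "0": "NNWWN", "1": "WNNNW", "2": "NWNNW", "3": "WWNNN", "4": "NNWNW",
    "5": "WNWNN", "6": "NWWNN", "7": "NNNWW", "8": "WNNWN", "9": "NWNWN",
}

def encode_itf(digits):
    """Return the alternating bar/space width sequence for an ITF barcode.

    The output alternates bar, space, bar, space, ... starting with a bar.
    """
    if len(digits) % 2 != 0 or not digits.isdigit():
        raise ValueError("ITF encodes an even number of digits")
    elements = ["N", "N", "N", "N"]            # start pattern: two narrow bar/space pairs
    for i in range(0, len(digits), 2):
        bars = DIGIT_PATTERNS[digits[i]]       # first digit of the pair -> the bars
        spaces = DIGIT_PATTERNS[digits[i + 1]] # second digit -> the interleaved spaces
        for bar, space in zip(bars, spaces):
            elements.append(bar)
            elements.append(space)
    elements += ["W", "N", "N"]                # stop pattern: wide bar, narrow space, narrow bar
    return "".join(elements)

# The Nabob Colombian code from above, rendered as 37 bar/space widths:
colombian = encode_itf("642262")
```

Note there is no checksum step anywhere in the encoding, which is consistent with the FAQ’s description of the plain (non-check-digit) variant.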

Step 3:  Using the cleaning disc as a mock object

Before I could set to manipulating the Tassimo brewing codes, I needed to get some baseline measurements from the barcodes I knew.  In effect, I wanted to run some physical unit tests on the machine using known data so that I could record the volume and temperature of water dispensed and respective brew cycle time;  I’d then try to “map” these measurements back to the codes to determine correlations to “programming” the Tassimo.

Using real pods to run these “unit tests” could get expensive – fast.  So, my approach was to use the plastic cleaning disc as a mock object in conjunction with barcodes generated from the six-digit codes I scanned from each T-DISC in Step 1.  By doing this, I could not only observe the effects of the codes on the brew parameters, but also verify that the barcodes worked – wasting only water, not precious coffee, in the process.

I began by using the Wasp Barcode Generator to create barcodes for each of the five beverage discs which I pasted into a Visio diagram and sized so that when printed and cut out, they would fit on to the barcode scanner in the machine.  This worked out to the convenient dimensions of 1” x 0.5”.  NB:  You can download the PDF of my worksheet here.

Mocking up a T-DISC

Mocking up a T-DISC is quite simple.  Below is a shot of the scanner in the Tassimo brew chamber where we need to place the generated barcode:


Next, here’s a sequence showing how I use the cleaning disc as a stand-in or “mock object” that simulates a Nabob Cafe Crema T-DISC.  On the left, I’m placing a generated barcode on the scanner window which will effectively override the cleaning disc’s program;  on the right, I’ve lowered the T-DISC loading tray which has the cleaning disc placed on it.  The disc is designed to allow water to flow straight through the spigot;  a real disc would retard the flow somewhat as the brewing takes place inside it before flowing through the spigot.


To run the test, I placed a Pyrex measuring cup in the machine, closed the clamshell, allowed the machine to scan the code (you can see this happen through the clear plastic window) and pressed the “start” button.  I recorded the time for the brew cycle and the temperature of the dispensed water using a meat thermometer.

T-DISC Unit Test Observations

From my “unit tests”, I recorded the following observations:


Not exactly promising – while I did have more information, no pattern was emerging to provide a positive correlation between number “x” and the three parameters.  I needed to run further tests against the machine using barcodes for extreme edge cases to try to coerce the Tassimo into revealing its programming secrets.  To this end, I created barcodes for the series “11 11 11” to “99 99 99”, with the only noticeable effect happening when I ran “55 55 55”:


Here's how the results table breaks down:

  • The "Code Numbers" column corresponds to the code digits I used to generate the barcode, indicated with the graphic in the next column for visual verification;
  • The Outcomes column indicates the Tassimo's status after reading the barcode:  "Auto" for valid programs that can be run, and "Manual" for programs that failed to run - effectively making the machine a hot water dispenser;
  • The three stages columns correspond to the steps in the brewing cycle, depending on the type of beverage;  coffees tend to have three stages, with the first being a short 3s burst of water to pre-saturate the coffee, followed by a pause of about 10-12s, then a flow of hot water for 45 or more seconds.  Other beverages will just run straight through in one stage, or have a pre-saturate stage followed by a flow stage;
  • The Water column shows the amount of water in millilitres - I did this as the machine is European in design and may use metric over imperial measures;
  • The water Temperature column shows results in degrees Fahrenheit.
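As a side note, the shape of the results table lends itself to a small data structure.  The sketch below models one row in Python; the field names and the sample water/temperature values are my own invention for illustration – only the stage timings come from the description above:

```python
from dataclasses import dataclass, field

@dataclass
class BrewTest:
    """One row of the results table; sample values are illustrative, not measured."""
    code: str              # six-digit T-DISC program used to generate the barcode
    outcome: str           # "Auto" (valid program) or "Manual" (hot-water dispenser)
    stages: list = field(default_factory=list)  # per-stage durations in seconds
    water_ml: int = 0      # dispensed water, in millilitres
    temp_f: float = 0.0    # dispensed water temperature, degrees Fahrenheit

    @property
    def cycle_time(self):
        """Total brew cycle time across all stages."""
        return sum(self.stages)

# A typical three-stage coffee profile: ~3 s pre-saturate, ~10 s pause, ~45 s flow.
# The code is real (Nabob Colombian); the water and temperature figures are made up.
example = BrewTest(code="642262", outcome="Auto",
                   stages=[3, 10, 45], water_ml=150, temp_f=170.0)
```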

Conclusions and Next Steps

Over the course of this hack I managed to learn several things about how the Tassimo single-serve beverage system works by employing basic “black box” testing techniques and using the cleaning disc as a “mock object” to run simulations to record various machine behaviours.  I observed that:

  1. Each beverage T-DISC (and thus the Tassimo itself) uses a barcode that employs Interleaved 2 of 5 symbology to control how it is “brewed” according to amount and temperature of water and time;
  2. These codes translate into six-digit strings that do not include any checksums;
  3. The six-digit strings can be used to create corresponding barcodes to simulate existing beverage products;
  4. There is definitely a way to modify these codes to enhance or degrade the brewing process!

What remains is to compile more test results using additional barcodes for more edge cases to see how the machine can be manipulated.  As I gather more information, I’ll post the results – and of course, if you happen to own a Tassimo, I’d love to hear about your experiments and discoveries!

Happy hacking!


By Chris R. Chapman at November 29, 2007 03:51
Filed Under: better practices, refactoring, rnchessboardcontrol

Welcome back to the 4.5th part of my multi-partite series that traces my progress into refactoring some old code that implements a chess board control – specifically, the rules engine that referees piece movement.  Unfortunately, I’m a bit late with this as some life events intervened, and as a result I’ve had to abbreviate what I wanted to do with this installment.  Hence, version 4.5!

So far, we’ve covered a bit of ground getting the stage set for aggressive refactorings with source control, code analysis tools, unit testing and refactoring aides.  We’ve restructured our solution, moved test projects around, uploaded them to a web-based repository for sharing and devised some tests to exercise the engine across some valid and invalid moves.

Today, we’re going to look at a helper object in the engine (formerly FENAnalyzer, now FENTranslator) that is responsible for setting up the chessboard in a variety of states using a specially-formatted string.  The motivation for examining this object is to be able to test the rules engine with more complex, in-progress game states without having to run through an entire set of moves, as we’ve done with the unit tests for replaying a complete game.

Catching Up…

Before I get too far, some quick notes on changes to the source that you might notice.  If you’re following along at home, I recommend deleting your working copy and pulling down the latest version of the solution – we should be at revision 32 right now:

  1. There’s a new solution folder called Tests where I’ve re-located RNChessBoard.RNChessRulesEngine.Tests, and where all future NUnit test assemblies will be stored.
  2. I’ve updated the ChessRulesEngineTestFixture class using the Extract Superclass refactoring to create BaseChessRulesEngineTestFixture, which allowed me to move some common methods for running the game-move replay methods up the inheritance chain and make them more widely available.  This is one of the most common refactorings I do when writing unit tests – sometimes, I just cut out the middle-man and write a base class to start off.

    I also introduced a new method into this base class for replaying a series of multiple games using jagged arrays, called ReplayTestGameBattery, and added some new replay arrays for testing black and white pawn moves (BlackPawnTestMoves and WhitePawnTestMoves).
  3. Renamed FENAnalyzer to FENTranslator to better reflect its dual purpose (i.e., it both constructs and interprets FEN strings).
  4. Did some minor renaming and cleaning up of the solution after moving things around – you’ll see this under the imaginative comments of “clean up on aisle x”.

Building new tests with Forsyth-Edwards Notation

While crafting the test arrays for the black and white pawn openings, I found myself wanting to construct mid-game scenarios to enable faster, more precise testing – say, for an emerging en-passant capture.  As it stands, I can only do this by constructing an array of sequential moves to replay the scenario – I wanted to jump right in with a preset board state.

Fortunately, I had already built this exact kind of functionality into the rules engine oh-so-long ago with a helper object called FENTranslator.  This class implements an engine that translates a simple string in Forsyth-Edwards Notation (FEN) into a basic array that the rules engine can then use to set piece positions and game states;  conversely, it can also create a FEN string from a current board set up.  Here’s how it’s used in the WinForms control to set the default start game position:


Quick FEN Primer

FEN strings are composed of six space-delimited fields that represent the current position of pieces on the board, the side to play, castling availability, the en passant target square and the halfmove and fullmove counters.  The first field defines the eight ranks of the chessboard, with the black pieces in lower-case letters (rook, knight, bishop, queen, king and pawn) and the white in upper-case;  squares that have no pieces are represented by numbers.  The FEN notation for the start of a game would be:

rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1

For example, after the typical white-side queen’s pawn opening, the FEN notation would be:

rnbqkbnr/pppppppp/8/8/3P4/8/PPP1PPPP/RNBQKBNR b KQkq d3 0 1

Black’s response with his king-side knight to f6:

rnbqkb1r/pppppppp/5n2/8/3P4/8/PPP1PPPP/RNBQKBNR w KQkq - 1 2
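The primer above can be made concrete with a small Python sketch of the parsing job a class like FENTranslator performs – splitting the six fields and expanding the piece-placement ranks.  This is an illustration of the notation, not the control’s actual code:

```python
def expand_rank(rank):
    """Expand one FEN rank, e.g. 'rnbqkb1r', into a list of 8 squares."""
    squares = []
    for ch in rank:
        if ch.isdigit():
            squares.extend(["."] * int(ch))   # a digit is a run of empty squares
        else:
            squares.append(ch)                # lower-case = black, upper-case = white
    return squares

def parse_fen(fen):
    """Split a FEN string into its six space-delimited fields."""
    fields = fen.split(" ")
    if len(fields) != 6:
        raise ValueError("FEN requires six space-delimited fields")
    placement, active, castling, en_passant, halfmove, fullmove = fields
    board = [expand_rank(r) for r in placement.split("/")]
    if len(board) != 8 or any(len(r) != 8 for r in board):
        raise ValueError("piece placement must describe 8 ranks of 8 squares")
    return {"board": board, "active": active, "castling": castling,
            "en_passant": en_passant, "halfmove": int(halfmove),
            "fullmove": int(fullmove)}

start = parse_fen("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
```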

The “KQkq” markers indicate whether either side has used their sole opportunity to castle king-side or queen-side.  For example, if white castles king-side, here’s how the board state would be represented in FEN:

rq2kbnr/pppbpppp/2n5/3p4/2BP4/4PN2/PPP2PPP/RNBQ1RK1 b kq - 0 5

Note that white’s indicator is removed, while black’s remains.  In order to better see how the current board’s FEN changes, I’ve added an additional textbox to our test app that calls the WinForm control’s GetCurrentPositionAsFEN() method, which in turn uses the FENTranslator object to generate the string:


Testing the FENTranslator Object

This turned out to be a little easier than I thought:  initially, I was quite concerned about the edge and corner cases that this object might present, since the typical FEN string has a lot of potential to be malformed.  As it turns out, I had accounted for this with some regular expressions.

First Tests – Not Null and Default States

I began my testing by creating the FENTranslatorTestFixture and did the routine Assert.IsNotNull check, then followed with a more ambitious test containing several assertions to validate the default state of the FENTranslator object.  This object can be instantiated one of two ways: with either a FEN string or an array.  To begin, I needed to supply the object with a 64-element integer array representing the start state of the board:

   27 private static readonly int[] START_BOARD_ARRAY = new int[64]
   28     {
   29         -4,-2,-3,-5,-6,-3,-2,-4,
   30         -1,-1,-1,-1,-1,-1,-1,-1,
   31         0,0,0,0,0,0,0,0,
   32         0,0,0,0,0,0,0,0,
   33         0,0,0,0,0,0,0,0,
   34         0,0,0,0,0,0,0,0,
   35         1,1,1,1,1,1,1,1,
   36         4,2,3,5,6,3,2,4
   37     };

Negative numbers represent the black pieces, positive numbers white and zeroes indicate an empty square – in practice, the rules engine maintains the current board state with a similar array, so this is a practical setup step.  My test, New_ValidFENString_NotNull, instantiates the FENTranslator with the start-position FEN string and checks each of the object’s default states – including the board array above – to ensure they are correct:

   57 [Test]
   58 public void New_ValidFENString_NotNull()
   59 {
   60     FENTranslator  fen = new FENTranslator (FEN_START_WHITE);
   62     Assert.IsNotNull(fen, "FENBuilder failed to instantiate.");
   63     Assert.IsTrue(fen.WhiteKingsideCastle, "Initial white king-side castle state is invalid.");
   64     Assert.IsTrue(fen.WhiteQueensideCastle, "Initial white queen-side castle state is invalid.");
   65     Assert.IsTrue(fen.BlackKingsideCastle, "Initial black king-side castle state is invalid.");
   66     Assert.IsTrue(fen.BlackQueensideCastle, "Initial black queen-side castle state is invalid.");
   68     Assert.AreEqual(1, fen.FullMoveNumber, "Initial full move number is invalid.");
   69     Assert.AreEqual(0, fen.HalfMoveClock, "Initial half-move number is invalid.");
   70     Assert.AreEqual(PlayerTurn.White, fen.ActiveColor, "Initial player turn is invalid.");
   72     Assert.IsNotEmpty(fen.ChessBoardArray, "Initial chessboard array property is invalid.");
   73     Assert.AreEqual(START_BOARD_ARRAY, fen.ChessBoardArray, "Initial chessboard array state is invalid.");
   75     Assert.AreEqual("-", fen.EnPassantTargetSquare, "Initial En Passant target square is invalid.");
   76 }
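The signed-integer board encoding is straightforward to sketch.  Assuming the piece values implied by START_BOARD_ARRAY (pawn=1, knight=2, bishop=3, rook=4, queen=5, king=6, with black negated), a FEN placement field maps to the 64-element array like so – a Python illustration, not the FENTranslator implementation:

```python
# Piece values implied by START_BOARD_ARRAY: pawn=1, knight=2, bishop=3,
# rook=4, queen=5, king=6; black pieces are negated, empty squares are 0.
PIECE_VALUES = {"p": 1, "n": 2, "b": 3, "r": 4, "q": 5, "k": 6}

def placement_to_array(placement):
    """Convert a FEN piece-placement field into a 64-element signed array."""
    board = []
    for rank in placement.split("/"):
        for ch in rank:
            if ch.isdigit():
                board.extend([0] * int(ch))       # run of empty squares
            else:
                sign = -1 if ch.islower() else 1  # lower-case = black
                board.append(sign * PIECE_VALUES[ch.lower()])
    return board

start_array = placement_to_array("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR")
```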

I added and tested each Assert individually to check my board state – they all passed, so I now had a good baseline regression test that I could move forward from.  Next, I decided to focus on tests that instantiated the object with a FEN string as I thought there would be a lot more work to do here.  To keep things brief, I’m going to review just a few of the tests – you can pull down the source to see more detail and my thinking process at the time.

Next Test – Inverse Relationships with ToString()

One of Hunt & Thomas’ unit testing maxims suggests that you should be able to validate your code by checking inverse relationships.  In other words, is there a way to validate an object’s methods by using its output as an input?  In our case, this can be done by instantiating the object with the FEN string used in New_ValidFENString_NotNull and calling the object’s ToString() implementation:

   18 private const string FEN_START_WHITE = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1";
   79 public void ToString_ValidStartFEN_Equality()
   80 {
   81     FENTranslator  fen = new FENTranslator (FEN_START_WHITE);
   82     Assert.AreEqual(FEN_START_WHITE, fen.ToString(), "Invalid FEN string was returned.");
   83 }

Line 18 shows the string constant I’m using to test for a valid default FEN string;  by calling ToString() on a freshly-instantiated FENTranslator object, there should be no difference – and indeed this test passes.  Of course, this test does suggest that we need to look at some edge cases for instantiation using an array - we’re placing a lot of trust in the ToString() method – but this will do as a backstop for the moment.
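The inverse-relationship idea generalizes beyond ToString(): any expand/compress pair can be checked by round-tripping.  Here’s a quick Python sketch of the same maxim applied to just the piece-placement field (again, illustrative only):

```python
def expand(placement):
    """Expand a FEN placement field into 8 strings of 8 squares ('.' = empty)."""
    rows = []
    for rank in placement.split("/"):
        row = ""
        for ch in rank:
            row += "." * int(ch) if ch.isdigit() else ch
        rows.append(row)
    return rows

def compress(rows):
    """Inverse of expand: re-encode runs of '.' as digit counts."""
    ranks = []
    for row in rows:
        rank, run = "", 0
        for ch in row:
            if ch == ".":
                run += 1
            else:
                if run:
                    rank += str(run)
                    run = 0
                rank += ch
        if run:
            rank += str(run)
        ranks.append(rank)
    return "/".join(ranks)

# Round-tripping should be the identity for any well-formed placement field.
start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
assert compress(expand(start)) == start
```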

Expected Exceptions Tests

A good portion of the tests I added address concerns around instantiating the object with a malformed FEN string.  To support these tests, I un-commented some lines of code in the ImportFEN() method to throw an ArgumentException when it detects fewer than six space-delimited fields:

  205 string[] fenFields = FENString.Split(new char[] {' '},6);
  206 if(fenFields.GetLength(0) < 6)
  207 {
  208     throw new ArgumentException
  209     ("FEN input string does not contain the required six space-delimited fields.");
  210 }

I tested for this using the ExpectedException attribute for my unit test:

   85 [Test,ExpectedException(typeof(System.ArgumentException))]
   86 public void New_BadFENString_NoSpaceDelimiters_ArgumentException()
   87 {
   88     string testFEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNRwKQkq-01";
   89     FENTranslator  fen = new FENTranslator (testFEN);
   90 }

But this isn’t quite right – it’s not so much the argument that’s the exception, but the format.  Accordingly, I modified the throw on line 208 to use a FormatException instead and updated my tests.

Next, I did some minor refactorings within the PopulateBoardArrayFromFEN() method to throw a FormatException when presented with a malformed FEN rank string - things weren’t nearly as expressive or complete as I wanted.  So, we went from this:

  261 int sqIndex=0;
  262 for(int rankIndex=0; rankIndex<fenRanks.GetLength(0); rankIndex++)
  263 {    
  264     string rank = fenRanks[rankIndex];
  265     if(!CheckFENRank(rank))
  266     {
  267         throw new ApplicationException
  268           ("The FEN piece placement field contains an incorrect rank string:\n" +
  269            "[" + rank + "]");
  270     }

to this:

  271 int sqIndex=0;
  272 for(int rankIndex=0; rankIndex<fenRanks.GetLength(0); rankIndex++)
  273 {    
  274     string rankString = fenRanks[rankIndex];
  275     if(!IsFENRankStringValid(rankString))
  276     {
  277         string rankException = "The FEN string at rank {0} is not well-formed: [{1}]";
  278         throw new FormatException(string.Format(rankException, rankIndex, rankString));
  279     }

I exercised this change in several tests, e.g. New_BadFENString_BadBlackRank8_abcdefgh_FormatException, New_BadFENString_InvalidRank3_SpaceCount9_FormatException, New_BadFENString_BadBoardDelimiters_FormatException, etc.

Of course, this led to refactoring IsFENRankStringValid itself – just to tighten things up a little – from this:

  446 private bool CheckFENRank(string FENRankString)
  447 {
  448     // Analyze FEN rank for space count
  449     MatchCollection mcSpaces = Regex.Matches(FENRankString,"[1-8]");
  450     int spaces=0;
  451     foreach(Match m in mcSpaces)
  452     {
  453         spaces+=int.Parse(m.Value);
  454     }
  455     if(spaces>8) return false;

to this:

  455 private bool IsFENRankStringValid(string FENRankString)
  456 {
  457      // Parse the FEN string ranks for empty squares
  458      MatchCollection mcEmptySquares = Regex.Matches(FENRankString,"[1-8]");
  459      int emptySquareCount = 0;
  460      foreach (Match m in mcEmptySquares)
  461        emptySquareCount += int.Parse(m.Value);
  463      if(emptySquareCount > 8) return false;

Incidentally, this method does all the heavy lifting for validating the FEN string ranks that I was concerned about – by leveraging the Matches method, I can iteratively apply a pattern against a string, which makes validating the FEN empty squares and piece positions a snap.
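For readers following along in another language, here’s roughly the same validation expressed with Python’s re module – a slightly stricter variant that also requires each rank to describe exactly eight squares, rather than just rejecting counts over eight:

```python
import re

# Only piece letters and empty-square counts 1-8 may appear in a rank.
VALID_RANK_CHARS = re.compile(r"^[pnbrqkPNBRQK1-8]+$")

def is_fen_rank_valid(rank):
    """Validate one FEN rank: legal characters, and exactly 8 squares in total."""
    if not VALID_RANK_CHARS.match(rank):
        return False
    empty = sum(int(m) for m in re.findall(r"[1-8]", rank))  # empty-square runs
    occupied = len(re.findall(r"[pnbrqkPNBRQK]", rank))      # occupied squares
    return empty + occupied == 8
```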

Next Steps:  More edge tests, boundary tests, state tests

So far, we’ve made some good progress testing the FENTranslator object in this post, and learned a little about Forsyth-Edwards Notation along the way, but there’s obviously more to do.  As I’ve mentioned above, there’s room to ensure the value boundaries for the test arrays are within acceptable ranges (i.e., -6 to 6), that the FEN fields for castling and en passant are working, etc.  Ultimately, though, the FENTranslator object wasn’t intended to be an engine as much as a helper class, which I need to keep in mind as I work through refactoring it.

Once I’ve added enough edge/corner case tests to bolster my confidence in the FENTranslator, I can return to constructing additional regression tests for validating piece movements and their outcomes in the rules engine before moving into any refactoring of that code.  While this may all seem a bit tedious, it is necessary to ensure we don’t make any fatal changes that would break the engine.

As always, you can obtain the latest release of the codebase from the repository to see what’s been accomplished so far and to monitor progress as we go along.  Cheers!

By Chris R. Chapman at November 27, 2007 04:27
Filed Under: agile, amuse

It’s funny in an ironic and so very, very wrong kind of way.


By Chris R. Chapman at November 22, 2007 04:07
Filed Under: .net, better practices, tools

Via dotNetSlackers, an all-too-familiar story: If only we'd used ANTS Profiler earlier... we would have had a shot at the $2 million cash prize!  The familiar part is how code profiling seems to always be a last consideration for developers – not the $2M part, obviously!

Written by a member of the Princeton DARPA Grand Challenge Team (the competition where entrants build driverless vehicles that must successfully negotiate a pre-set course), the post describes how an apparent “memory leak” in their .NET code was causing their test vehicle to literally crash:

We were unique among the teams in the finals, in that we used stereo vision, as opposed to scanning lasers, to detect and range obstacles. All in all, we wrote 10,000 lines of C# code to drive the cars.

In the finals, we ran for 9 miles before succumbing to a memory leak in the obstacle-detection code. Actually, most of our code is written in garbage-collected C#, so it wasn't a memory leak per se, but it wasn't until two weeks later that we discovered the true problem.

The nature of the problem?  Objects that, while “deleted”, still maintained subscriptions to events, thus causing the heap to blow chunks.  How did they figure this out?  By profiling their code to see how it was working in real time:

One of our team members downloaded the 14-day trial of ANTS Profiler, and we ran it on our car's guidance code. We profiled the memory usage and saw the obstacle list blowing up. How could this be? We called "delete" on those old obstacles! To our amazement, it was only minutes before we realized that our list of detected obstacles was never getting garbage collected. Though we thought we had cleared all references to old entries in the list, because the objects were still registered as subscribers to an event, they were never getting deleted.

We added one line of code to remove the event subscription and, over the next three days, we successfully ran the car for 300 miles through the Mojave desert.

Definitely a testament to the power of profiling!
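The failure mode is easy to reproduce outside .NET, too.  Here’s a Python sketch of the same bug – a publisher’s handler list keeping “deleted” subscribers alive – with hypothetical class names standing in for their guidance code; the fix, as in their case, is a single unsubscribe call:

```python
import gc
import weakref

class Event:
    """Minimal publish/subscribe event, standing in for a .NET event."""
    def __init__(self):
        self._handlers = []
    def subscribe(self, handler):
        self._handlers.append(handler)
    def unsubscribe(self, handler):
        self._handlers.remove(handler)
    def fire(self):
        for handler in list(self._handlers):
            handler()

class Obstacle:
    """Hypothetical detected obstacle that listens for sensor updates."""
    def __init__(self, sensor_updated):
        self._sensor_updated = sensor_updated
        sensor_updated.subscribe(self.on_sensor_updated)  # publisher now references us
    def on_sensor_updated(self):
        pass
    def release(self):
        # The one-line fix: drop the subscription so we can be collected.
        self._sensor_updated.unsubscribe(self.on_sensor_updated)

sensor_updated = Event()
obstacle = Obstacle(sensor_updated)
tracker = weakref.ref(obstacle)

del obstacle                   # "delete" the obstacle...
gc.collect()
assert tracker() is not None   # ...but the event subscription pins it in memory

tracker().release()            # remove the event subscription
gc.collect()
assert tracker() is None       # now the obstacle is actually collected
```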

Personally, I prefer AQTime to ANTS, as the latter in my experience doesn’t profile SharePoint applications very well – if at all.  AQTime also has an array of reports and dashboards that I like, e.g. call graphs, top-10 hit counts, longest methods to execute (with and without children) and more.  Nonetheless, it’s well worth trying the free versions of each on your own code to see how they work and what you prefer.

To me, profilers are the best investment you can make in your tools, outside of an IDE and refactoring aids.


By Chris R. Chapman at November 22, 2007 01:58
Filed Under:


Each has links for the 32- and 64-bit releases – get ‘em while they’re hot.

About Me

I am a Toronto-based software consultant specializing in SharePoint, .NET technologies and agile/iterative/lean software project management practices.

I am also a former Microsoft Consulting Services (MCS) Consultant with experience providing enterprise customers with subject matter expertise for planning and deploying SharePoint as well as .NET application development best practices.  I am MCAD certified (2006) and earned my Professional Scrum Master I certification in late September 2010, having previously earned my Certified Scrum Master certification in 2006. (What's the difference?)