Leftshift’s Weblog

Techniques to improve your code

Plain Text Story Runner using RSpec

At work we have started looking at the use of automated acceptance tests to validate our development efforts. There are quite a few tools out there that can help with this: FitNesse, RSpec, NSpec, Selenium, Watir and WatiN, to name a few.

So far we have found most success with RSpec and Watir. RSpec allows us to write plain text stories and acceptance criteria and run them as part of our integration builds. The slight problem is that we are mainly a .NET shop, so using Ruby poses a few problems with regards to experience and knowledge. Unfortunately NSpec [which incorporates NBehave] doesn’t currently have a plain text story runner. Anyway, here’s how you go about setting up RSpec in three simple steps.

Step 1: Write your story and acceptance criteria and save in a plain text file

Step 2: Create the story interpreter

Step 3: Create the runner

So without further ado I’ll explain each step in detail: Continue reading
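To give a feel for how the three steps fit together, here is a minimal hand-rolled sketch in Ruby. This is a simplified stand-in for RSpec's real plain text story runner, not its actual API; the story text, class and method names are all my own, inlined so the example is self-contained.

```ruby
# Step 1: the story and acceptance criteria, as they might appear in a
# plain text file (inlined here to keep the example self-contained).
STORY = <<TEXT
Story: transfer to savings account
  Scenario: savings account is in credit
    Given a savings account with 100 pounds
    When I deposit 50 pounds
    Then the balance should be 150 pounds
TEXT

# Step 2: the interpreter -- maps Given/When/Then lines to step blocks
# registered against regular expressions.
class StoryInterpreter
  def initialize
    @steps = {}
  end

  def step(pattern, &block)
    @steps[pattern] = block
  end

  # Step 3: the runner -- walks the story text, matching each step line
  # against the registered patterns and invoking the matching block.
  def run(story_text)
    story_text.each_line do |line|
      line = line.strip.sub(/^(Given|When|Then) /, '')
      @steps.each do |pattern, block|
        if (m = pattern.match(line))
          block.call(*m.captures)
        end
      end
    end
  end
end

interpreter = StoryInterpreter.new
balance = 0
interpreter.step(/a savings account with (\d+) pounds/) { |amt| balance = amt.to_i }
interpreter.step(/I deposit (\d+) pounds/)              { |amt| balance += amt.to_i }
interpreter.step(/the balance should be (\d+) pounds/) do |amt|
  raise "expected #{amt}, got #{balance}" unless balance == amt.to_i
end

interpreter.run(STORY)
puts balance  # 150
```

The real story runner does considerably more (pending steps, reporting, scenario isolation), but the shape is the same: plain text in, step definitions matched by regex, and a runner that wires the two together.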


26 June 2008 Posted by | Automation | , , , | Leave a comment

miniSPA 2008

I’ll be attending this year’s miniSPA conference. It’s billed as a free sampler of the best sessions from the full SPA conference held earlier this year. In case you are wondering, SPA stands for Software Practice Advancement.

24 June 2008 Posted by | Events | | Leave a comment

Inventive uses of NUnit #1

The first in a series of articles that describe how you can go about using a testing framework in an ‘interesting’ way. First up is the shopping list.

NUnit Shopping List

What could be easier than updating a line of code, recompiling and running the test suite whilst out shopping?

A Test for Cheese

23 June 2008 Posted by | .NET | , | Leave a comment

BBC Hackday – Mashed 08

This weekend the BBC have arranged a follow-up event to last year’s hackday. Mashed 08 is taking place at Alexandra Palace in North London and looks like it will be great fun. Have a look at the schedule, and here for a quick overview.

It is now all over. Have a look here to see how it all went.

18 June 2008 Posted by | Events | | Leave a comment

TDT

We have been running coding dojos once a week after work for the last couple of months. They have proved a valuable learning resource, and people have kept attending even though the sessions are in their own time. Each dojo focuses on one aspect of technology; past topics include implementing UML class diagrams, Ruby on Rails 101 and OO principles.

One technique I have found valuable for running a session is to prepare a set of failing tests beforehand. I recently ran a dojo on LINQ and lambdas. I started with a test for the simplest possible LINQ expression I could think of and made it pass [using the standard red, green, refactor cycle]. I then removed the code and stored it inside a Visual Studio snippet for quick retrieval during the dojo. At dojo time I gave a brief intro to LINQ and the project structure I provided, then talked through the tests one by one and got the attendees to implement the code to make each test pass. This keeps people focused on a single problem and keeps the scope down to the knowledge you wish to get across. The feedback I received after the dojo was positive. I call this approach Test Driven Teaching [TDT], and I would definitely encourage you to try it if you are thinking of running your own dojo.
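The dojo itself used C# and LINQ; the sketch below is a Ruby analogue of the same idea, showing the kind of prepared test sequence I mean. Each test isolates one concept and attendees make them pass one at a time (the `assert_equal` helper and the test data are mine, not from the dojo).

```ruby
# A tiny assertion helper so the example needs no test framework.
def assert_equal(expected, actual)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

numbers = [1, 2, 3, 4, 5]

# Test 1: the simplest possible query -- select everything.
assert_equal [1, 2, 3, 4, 5], numbers.select { true }

# Test 2: filtering with a predicate (LINQ's Where).
assert_equal [2, 4], numbers.select { |n| n.even? }

# Test 3: projection (LINQ's Select).
assert_equal [1, 4, 9, 16, 25], numbers.map { |n| n * n }

# Test 4: combining the two as a single chained expression.
assert_equal [4, 16], numbers.select { |n| n.even? }.map { |n| n * n }

puts 'all tests pass'
```

In a real session each assertion would start out failing with a stubbed implementation, and the group fills in the code until the suite is green.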

12 June 2008 Posted by | Coaching | , | Leave a comment

Consistency Is Key

I’m sure I am not the only one who puzzles over Microsoft’s naming conventions for C#. I understand the Pascal and camel casing rules, but what I don’t get is the exception made for two letter acronyms. I’m looking at creating a dictionary of my code so that I can start to see patterns and opportunities for re-use. To do this I need to be able to break up the name of a method or class into its constituent parts. Regular expressions seemed like a good way to go. It is fairly trivial to write a regex that matches the parts that make up a Pascal and camel cased identifier. What makes this tricky is the exception for short acronyms. Here is my attempt:

Regex splitName = new Regex(@"^I(?=(\p{Lu}{2})*(\p{Lu}\p{Ll}|$))|(^\p{Ll}+|\p{Lu}\p{Ll}+|\p{Lu}{2})");

It breaks down like so: Continue reading
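The regex above is C#, but Ruby's engine understands the same `\p{Lu}`/`\p{Ll}` Unicode categories, so you can exercise the identical pattern there. The sample identifiers below are my own; the `name_parts` helper is just a convenience for collecting the full matches.

```ruby
# The same pattern as the C# version, exercised in Ruby.
SPLIT_NAME = /^I(?=(\p{Lu}{2})*(\p{Lu}\p{Ll}|$))|(^\p{Ll}+|\p{Lu}\p{Ll}+|\p{Lu}{2})/

def name_parts(identifier)
  parts = []
  # scan with a block sets Regexp.last_match per match, giving us the
  # whole matched text rather than the capture groups.
  identifier.scan(SPLIT_NAME) { parts << Regexp.last_match(0) }
  parts
end

p name_parts('IOException')    # ["IO", "Exception"]  -- IO kept as a 2-letter acronym
p name_parts('IDisposable')    # ["I", "Disposable"]  -- leading I split off as the interface prefix
p name_parts('myDBConnection') # ["my", "DB", "Connection"]
```

Note how the lookahead decides whether a leading `I` is an interface prefix (followed by a normal Pascal-cased word) or the start of a two-letter acronym like `IO`.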

10 June 2008 Posted by | .NET, Code Quality | , | Leave a comment

DDL Hell

There are several tools you can use to help track database changes, so that come release time you avoid an error or omission in the database scripts that brings your system crashing to its knees. However, keeping change scripts up to date requires a lot of discipline. Tools such as Redgate’s SQL Compare can validate that one database schema matches another and produce change scripts to align them. This is all useful stuff, but some operations, such as renaming tables and columns, cannot be picked up using this technique. As a result I mainly use these types of tool to verify that changes have been successfully applied. You are still left with the problem that a lack of discipline can mean cancelling the release. There is another method you can use in conjunction with the comparison tools, however. When I first heard about the ability in SQL Server 2005 to capture DDL [Data Definition Language] changes, I immediately thought about how I could apply the feature to the problem of tracking DB changes.

I got hold of an example script from here and modified it like so:

Continue reading
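For readers unfamiliar with the SQL Server 2005 feature in question, here is a minimal illustration of the general shape of such a script. This is not the author’s actual modified script; the table, trigger and column names are hypothetical.

```sql
-- An audit table to hold one row per schema change.
CREATE TABLE DDLChangeLog (
    EventTime  datetime NOT NULL DEFAULT GETDATE(),
    LoginName  sysname  NOT NULL,
    EventData  xml      NOT NULL
);
GO

-- A database-level DDL trigger that fires on any DDL statement.
CREATE TRIGGER trgLogDDLChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    -- EVENTDATA() returns an XML document describing the DDL statement,
    -- including the event type and the full T-SQL text that ran.
    INSERT INTO DDLChangeLog (LoginName, EventData)
    VALUES (ORIGINAL_LOGIN(), EVENTDATA());
END;
```

With something like this in place, every CREATE, ALTER and DROP is captured automatically, regardless of whether anyone remembered to write a change script.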

30 May 2008 Posted by | Automation | , , | 3 Comments

Classic Mistakes

Interesting findings on classic mistakes in software can be found here.

As always use the DRY principle to avoid them!

28 May 2008 Posted by | Smells | , | Leave a comment

CITCON 2008

I’ve just booked my place at CITCON Europe 2008.

CITCON is an open-space event looking at all aspects of CI and testing.

14 May 2008 Posted by | Continuous Integration | | Leave a comment

The Complexity Implementation Tangle

Using relative complexity in your estimation is quite an effective estimating technique. To do this you would normally assign a complexity score to each feature or story. The scale a lot of people use is based on the Fibonacci sequence, where each number is the sum of the two before it [F(n) = F(n-1) + F(n-2)]. Starting with 0 and 1 you obtain the sequence 1, 2, 3, 5, 8, 13 and so on. You assign points based on relative complexity, where a 2-point story would be roughly twice as complicated as a 1-point story. The power of this approach comes from the comparison – we seem to be naturally better at comparing than at assigning absolute values.
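The point scale described above is trivial to generate. A quick sketch (the method name is mine):

```ruby
# Build the story-point scale: seed with 0 and 1, repeatedly append the
# sum of the last two values, then drop the seeds.
def story_point_scale(size)
  points = [0, 1]
  points << points[-1] + points[-2] while points.size < size + 2
  points.drop(2)  # drop the 0 and 1 seeds, leaving the usable scale
end

p story_point_scale(6)  # [1, 2, 3, 5, 8, 13]
```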

If you take this approach it is very useful to measure how long each story took to implement. With this information in hand you can draw a picture similar to the following

On the top side of the diagram you have the complexity points, scaled appropriately. On the bottom side you have implementation time, again scaled appropriately. If you plot all of the features on the bottom in the correct position with regard to the time taken to implement them, and draw a straight line back to each one’s complexity point estimate [I know my picture above doesn’t use straight lines; I’ll update the post with a better example], you’ll end up with something along the lines of the diagram above.

In an ideal world, all of the features estimated at 1 complexity point will have taken less time to develop than those estimated at 2 points, and so on. Where the lines tangle [cross each other] suggests where your estimation could be improved. Your retrospective can be used to address such issues.
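The same check can be done numerically rather than by eye. Each crossing in the diagram corresponds to a pair of stories where the lower-point story took longer to implement than the higher-point one, so counting those pairs counts the tangles. A hedged sketch, with made-up story data:

```ruby
# Each tangle is a pair of stories whose relative complexity and
# relative implementation time disagree.
Story = Struct.new(:name, :points, :hours)

def tangles(stories)
  stories.combination(2).select do |a, b|
    # Negative product means one ordering says a > b and the other says a < b.
    (a.points - b.points) * (a.hours - b.hours) < 0
  end
end

stories = [
  Story.new('login',   1,  4),
  Story.new('search',  2, 10),
  Story.new('export',  2,  3),   # a 2-pointer that was quicker than the 1-pointer
  Story.new('reports', 5, 20),
]

tangles(stories).each do |a, b|
  puts "#{a.name} (#{a.points}pt, #{a.hours}h) tangles with #{b.name} (#{b.points}pt, #{b.hours}h)"
end
```

Stories with equal points are ignored by the check, since taking different times at the same complexity is expected; a run of zero tangles would mean your point estimates ordered the work perfectly.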

13 May 2008 Posted by | Metrics | , , | Leave a comment