Leftshift’s Weblog

Techniques to improve your code

MiniSPA Roundup

I attended the miniSPA conference yesterday at the BCS offices near Covent Garden in London. SPA stands for Software Practice Advancement and “is where IT practitioners gather to reflect, exchange ideas and learn.”

miniSPA is billed as a free taster for the full event, where you can experience the most popular sessions. The real thing lasts for three days [and nights!] rather than just the one.

After the usual intro, there was a choice of two streams for each of the three sessions run during the day.

My first choice was deciding between Best Practices for Finding your way into new Projects – quickly… and Getting To ‘No’. I plumped for finding your way into new projects. After Marina had set the scene and everyone had introduced themselves, we brainstormed ideas for practices we thought would be useful when starting on a new project. Everybody stuck their post-its on the wall under one of the five categories. The categories were Team / Culture, Me and Myself, Me and Others, Me and My Computer, and another that I can’t recall right now. Unfortunately, due to the time constraints, we were unable to review the output. However, I think Marina is going to post the output somewhere, which will be worth examining.

Lastly, each table carried out a metaphorical exercise where we had to pick a scenario outside of the IT world and map its activities back onto an IT project. Our table came up with a prison break. The effect of selecting a topic completely outside the normal domain was interesting. As everybody was essentially a novice, the fear of making a mistake was eliminated. This allowed the participants to brainstorm in a very uninhibited way, and to have some fun. It was actually surprisingly easy to map most activities back onto the world of an IT project. It turns out escaping from gaol lends itself to an agile approach!

Next up was Awesome Acceptance Testing or Storyboarding the Domain Model. I went for Awesome Acceptance Testing. Unfortunately Dan North couldn’t make it, which was a shame. He was replaced by Simon, one of Joe’s colleagues from Google. The session was an introductory-level talk describing acceptance testing. There was not as much interaction in this one, though a few of the questions people asked did lead to some interesting diversions. I didn’t learn a whole lot from the session, but it did give me confidence that we are doing the right thing and are probably ahead of the curve where I work.

The last choice of the day was a toss-up between Domain-Specific Modelling for Full Code Generation and Effective Pairing. Not having any strong feelings about code generation, I chose the Effective Pairing session. Normally the practice of pair working is described in terms of programmers [pair programming]; this session looked at how effective a practice pairing is in other domains. Everybody lined up in order of how much they liked or practiced pairing, and we were then split into teams of five, with each team taking somebody from the front, middle and back of the line to ensure some balance. With five to a team, one person worked alone to give adversarial feedback, and pairs and singles swapped after each exercise. There were four pairing exercises allocated among the teams but, again constrained by time, each team only got to try two of them. The four themes for the pairing exercises were creative, analytic, visual / spatial / physical and people-centric.

I did the creative and analytic exercises. The creative exercise consisted of describing a painting to a blind person in at least 200 words. Working in a pair, I found that we picked up more detail than I would have done individually. There was almost an analysis / brainstorming phase at first where we discussed various aspects of the painting; this was very effective in a pair, although actually writing the prose was less so. For the analytic exercise I worked alone. Whilst I was free to direct what I did, I certainly lacked some of the speed and accuracy that the pairs reported back. I found this session interesting in that it showed me what it feels like once the payoff point for pairing has been crossed. In general, pairing is a great practice, and this session encouraged me to use it where possible and to recognise the point at which to stop.

Overall miniSPA was a well organised, enjoyable and interactive conference. I definitely recommend attending the real thing next year.

13 July 2008 | Events

Plain Text Story Runner using RSpec

At work we have started looking at the use of automated acceptance tests to provide validation of our development efforts. There are quite a few tools out there that can help with this: FitNesse, RSpec, NSpec, Selenium, Watir and WatiN, to name a few.

So far we have found the most success with RSpec and Watir. RSpec allows us to create plain text stories and acceptance criteria and run these as part of our integration builds. The slight problem we have is that we are mainly a .NET shop, so using Ruby poses a few problems with regard to experience and knowledge. Unfortunately, NSpec [which incorporates NBehave] doesn’t currently have a plain text story runner. Anyway, here’s how you go about setting up RSpec in three simple steps.

Step 1: Write your story and acceptance criteria and save them in a plain text file
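To give a feel for the format, here is a minimal example in the Given/When/Then style that RSpec’s plain text story runner understands. The account transfer scenario and the file name transfer_to_cash_account.story are purely illustrative, not one of our real stories:

    Story: transfer to cash account

      As a savings account holder
      I want to transfer money from my savings account
      So that I can get cash easily from an ATM

      Scenario: savings account is in credit
        Given my savings account balance is 100
        And my cash account balance is 10
        When I transfer 20
        Then my savings account balance should be 80
        And my cash account balance should be 30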

Step 2: Create the story interpreter
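The interpreter [or steps file] maps each plain text step onto Ruby code. Here is a rough sketch using the steps_for API from the RSpec story runner; the Account class and its transfer_to method are assumed domain code to match the illustrative story above:

    # account_steps.rb -- sketch only; Account is an assumed domain class
    require 'spec/story'

    steps_for(:account) do
      Given("my savings account balance is $balance") do |balance|
        @savings = Account.new(balance.to_i)
      end

      Given("my cash account balance is $balance") do |balance|
        @cash = Account.new(balance.to_i)
      end

      When("I transfer $amount") do |amount|
        @savings.transfer_to(@cash, amount.to_i)
      end

      Then("my savings account balance should be $balance") do |balance|
        @savings.balance.should == balance.to_i
      end

      Then("my cash account balance should be $balance") do |balance|
        @cash.balance.should == balance.to_i
      end
    end

The $balance and $amount placeholders are captured from the story text and passed into the blocks as strings.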

Step 3: Create the runner
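The runner simply ties the story file and the steps together. Something along these lines, assuming the file names used in the sketches above:

    # stories.rb -- sketch of a runner for the example story
    require 'spec/story'
    require File.dirname(__FILE__) + '/account_steps'

    with_steps_for(:account) do
      run File.dirname(__FILE__) + '/transfer_to_cash_account.story'
    end

Running it with plain old ruby stories.rb should print the result of each scenario.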

With those three pieces in place, the stories can be run as part of our integration builds.

26 June 2008 | Automation

The Chicken Dance

Well, that’s what it sounds like when my antipodean friends talk about the process of checking in code. Seriously though, development is a complex beast, as you realise when you start talking in detail about any of the constituent tasks a developer is responsible for. Take developers’ testing responsibilities as an example [which I conveniently ran a workshop on last week]. These break down into the two broad areas of verification and validation.

A workshop to figure these things out is a very useful tool and one that I would recommend running at your workplace. It is truly incredible how much coverage of a topic you can get with ten people sitting around a table. We spent 5-10 minutes with everybody writing down every responsibility or task they could think of, each on a separate post-it. We went through a whole pack of post-it notes [admittedly with a bit of duplication]. We then took it in turns to stick the post-its on the board, roughly grouping them and recognising duplicates as we did so. I’m currently busy writing these up for our internal wiki. Making developers aware of the broad range of testing tasks, and supporting this with automation of acceptance tests, unit tests and so on, will really help improve the quality of the code delivered to the QA teams and, eventually, the customer. Ideally everybody will pick up on this and you will end up with a zero-defect culture.

12 May 2008 | Code Quality, Metrics