Leftshift’s Weblog

Techniques to improve your code

DDL Hell

There are several tools you can use to help track database changes, so that come release you avoid an error or omission in the database scripts that brings your system crashing to its knees. However, keeping change scripts up to date requires a lot of discipline. Tools such as Redgate's SQL Compare can validate that one database schema matches another and produce change scripts to align them. This is all useful stuff, but some operations, such as renaming tables and columns, cannot be picked up using this technique. As a result I mainly use these types of tool to verify that changes have been successfully applied. You are still left with the problem that a lack of discipline can mean cancelling the release.

There is, however, another method you can use in conjunction with the comparison tools. When I first heard about the ability in SQL 2005 to capture DDL [Data Definition Language] changes, I immediately thought about how I could apply the feature to the problem of tracking DB changes.

I got hold of an example script from here and modified it.
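
In outline, the idea is a database-level DDL trigger that writes every schema change into a log table, which you can then query when preparing a release. The following is only a rough sketch of that idea rather than the exact script; the connection details, table and trigger names are my own illustrative assumptions:

import pyodbc  # assumed: pyodbc plus an ODBC DSN pointing at the SQL 2005 database

CREATE_LOG_TABLE = """
IF OBJECT_ID('dbo.DDLChangeLog') IS NULL
    CREATE TABLE dbo.DDLChangeLog (
        PostTime  DATETIME,
        LoginName NVARCHAR(100),
        EventType NVARCHAR(100),
        Command   NVARCHAR(MAX)
    );
"""

CREATE_TRIGGER = """
CREATE TRIGGER trg_LogDDLChanges ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @data XML;
    SET @data = EVENTDATA();
    INSERT INTO dbo.DDLChangeLog (PostTime, LoginName, EventType, Command)
    VALUES (GETDATE(),
            CONVERT(NVARCHAR(100), CURRENT_USER),
            @data.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)'),
            @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'NVARCHAR(MAX)'));
END;
"""

def install(conn):
    # Run once per database: create the log table and the DDL trigger.
    cursor = conn.cursor()
    cursor.execute(CREATE_LOG_TABLE)
    cursor.execute(CREATE_TRIGGER)
    conn.commit()

def recent_changes(conn, days=7):
    # Pull back the DDL events captured over the last few days.
    cursor = conn.cursor()
    cursor.execute(
        "SELECT PostTime, LoginName, EventType, Command "
        "FROM dbo.DDLChangeLog WHERE PostTime > DATEADD(day, ?, GETDATE()) "
        "ORDER BY PostTime", -days)
    return cursor.fetchall()

if __name__ == "__main__":
    connection = pyodbc.connect("DSN=OurDatabase;Trusted_Connection=yes")
    install(connection)
    for change in recent_changes(connection):
        print(change.PostTime, change.LoginName, change.EventType)

Run against each environment, the log gives you a record of every rename, alter and drop since the last release, which the comparison tools on their own would miss.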

30 May 2008 | Automation | 3 Comments

Classic Mistakes

Interesting findings on classic mistakes in software can be found here.

As always, use the DRY principle to avoid them!

28 May 2008 | Smells | Leave a comment

CITCON 2008

I've just booked my space at CITCON Europe 2008.

CITCON is an open space event looking at all aspects of CI and testing.

14 May 2008 | Continuous Integration | Leave a comment

The Complexity Implementation Tangle

Using relative complexity is quite an effective estimating technique. To do this you would normally associate a complexity score with each feature or story. The scale a lot of people use is based on the Fibonacci sequence, where each number is the sum of the two before it: F(n) = F(n-1) + F(n-2). Starting with 0 and 1, and dropping the duplicated 1, you obtain the sequence 1, 2, 3, 5, 8, 13 and so on. You assign points based on relative complexity, where a 2 point story would be roughly twice as complicated as a 1 point story. The power of this approach comes from the comparison – we seem to be naturally better at this than at assigning absolute values.
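
The point scale itself is trivial to generate; a quick Python sketch, purely for illustration:

def point_scale(n):
    # First n story point values on the Fibonacci-based scale: 1, 2, 3, 5, 8, 13, ...
    scale = []
    a, b = 1, 2  # start past the 0, 1, 1 at the head of the raw sequence
    while len(scale) < n:
        scale.append(a)
        a, b = b, a + b
    return scale

print(point_scale(6))  # [1, 2, 3, 5, 8, 13]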

If you take this approach, it is very useful to measure how long each story took to implement. With this information in hand you can draw a picture similar to the following:

On the top side of the diagram you have the complexity points, scaled appropriately. On the bottom side you have the implementation time, again scaled appropriately. If you plot each feature on the bottom in the correct position with regard to the time it took to implement, and draw a straight line back to its complexity point estimate [I know my picture above doesn't use straight lines; I'll update the post with a better example], you'll end up with something along the lines of the diagram above.

In an ideal world, all of the features that were estimated at 1 complexity point will have taken you less time to develop than those you estimated at 2 complexity points, and so on. The places where the lines tangle [cross each other] suggest where your estimation could be improved. Your retrospective can be used to address such issues.
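
For what it's worth, a picture like this is easy enough to knock up. Here is a rough matplotlib sketch with made-up feature names and numbers, drawing one straight line per feature from its complexity estimate on the top scale to its implementation time on the bottom, so that any tangles stand out:

import matplotlib.pyplot as plt

# (feature, complexity points, days to implement) -- illustrative data only
features = [("A", 1, 1.0), ("B", 2, 5.0), ("C", 2, 1.5), ("D", 3, 4.0), ("E", 5, 6.0)]

max_points = max(p for _, p, _ in features)
max_days = max(d for _, _, d in features)

fig, ax = plt.subplots()
for name, points, days in features:
    x_top = points / max_points      # position on the complexity scale (top)
    x_bottom = days / max_days       # position on the time scale (bottom)
    ax.plot([x_top, x_bottom], [1, 0], marker="o")   # one straight line per feature
    ax.annotate(name, (x_top, 1), textcoords="offset points", xytext=(0, 5))

ax.set_yticks([0, 1])
ax.set_yticklabels(["Implementation time", "Complexity points"])
ax.set_xticks([])
plt.show()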

13 May 2008 | Metrics | Leave a comment

The Chicken Dance

Well, that's what it sounds like when my antipodean friends talk about the process of checking in code. Seriously though, development is a complex beast, as you realise when you start talking in detail about any of the constituent tasks that a developer is responsible for. Take developers' testing responsibilities as an example [which I conveniently ran a workshop on last week]. These break down into the two broad areas of verification and validation.

A workshop to figure these things out is a very useful tool, and one that I would recommend at your workplace. It is truly incredible how much depth of coverage you can get on a topic with ten people sitting around the table. We spent 5-10 minutes with everybody writing down every responsibility or task they could think of, one per post-it. We went through a whole pack of post-it notes [admittedly with a bit of duplication]. We then took it in turns to stick the post-its on the board, roughly grouping them and recognising duplicates as we did so. I'm currently busy writing these up for our internal wiki. Making developers aware of the broad range of testing tasks, and supporting this with automation of acceptance tests, unit tests and so on, will really help improve the quality of the code delivered to the QA teams and eventually the customer. Ideally everybody will pick up on this and you will end up with a zero defect culture.

12 May 2008 | Code Quality, Metrics | Leave a comment

Standards and Guidelines are Useless

That is quite a bold statement I just made, but bear with me. I've come to the opinion that all standards, guidelines and best practices are useless if they only exist in a document somewhere. Invariably this document gets lost down the back of the enterprise sofa weeks after its initial distribution. Developers stop referring to it, it goes out of date and new developers are unaware of its existence.

The good news is that this problem can be solved quite simply. Automation is the answer, and it places even more importance on having a sensible CI [Continuous Integration] strategy. All of the things that are truly important to the quality of your code can be automated, and standards and guidelines can be checked. By integrating these checks into your build platform, developers receive regular feedback that they are more likely to act upon. Nightly builds can be set up to provide a full suite of information about adherence to standards and guidelines, along with build health. These can include, for example, the standard stuff such as unit test coverage alongside naming convention, web accessibility and code readability checks. When a standard or guideline is changed, developers get feedback about it by the next day at worst.
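
To give a flavour of what such a check might look like, here is a small script that could run as a build step and fail the build when a guideline is broken. The guideline, file layout and naming used here are purely illustrative assumptions:

import sys
from pathlib import Path

def check_interface_naming(src_root):
    # Assumed guideline: C# interface names start with 'I'. Returns any violations found.
    violations = []
    for path in Path(src_root).rglob("*.cs"):
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            stripped = line.strip()
            if stripped.startswith(("public interface ", "internal interface ", "interface ")):
                name = stripped.split("interface ", 1)[1].split()[0].split(":")[0].split("<")[0]
                if not name.startswith("I"):
                    violations.append("%s:%d: interface '%s' should start with 'I'" % (path, line_no, name))
    return violations

if __name__ == "__main__":
    problems = check_interface_naming(sys.argv[1] if len(sys.argv) > 1 else ".")
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)   # a non-zero exit code fails the CI build step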

By eliminating waste in the feedback loop and ensuring that code is tested against the standards in place, you will quickly start to see the real effect of these standards and take an agile approach to evolving them to improve your codebase.

7 May 2008 | Code Quality, Metrics | 1 Comment

The 56 Complexity Point Dash

Tracking your progress is traditionally done through the use of burn-up / burn-down charts. I'd like to suggest an alternative – The Sprint.

A Sprint Chart
As you can see, it resembles a real race, where the length of the course is the sum of the complexity points you intend to deliver for that sprint.

The running line-up is:

Lane 1 [Green]: The Pacemaker. Each day she runs the same amount and crosses the finish line at the end of the sprint.

Lane 2 [Red]: You / your team. Every time you complete a feature, you move your runner forward by that feature's points.

Lane 3…N [Blue and Yellow]: You / your team over the last N sprints, showing progress as measured in each previous sprint.

If this information is captured every day, it'd be fairly trivial to track your progress in a visual manner. Simply printing the chart each day and flipping through the pages would give you an animated view of your progress throughout the sprint. You can see how far off the pace you are and how you are doing compared to previous sprints. You could even create an animation and send it to the team each day.
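
If you fancy automating the picture itself, here is a rough matplotlib sketch of a single day's page; the numbers, colours and file name are made up purely for illustration:

import matplotlib.pyplot as plt

course_length = 56     # complexity points planned for the sprint
sprint_days = 10
day = 6                # the day being drawn

# Cumulative points completed per day -- made-up numbers.
current_sprint = [0, 3, 8, 8, 13, 21, 26]
previous_sprints = [[0, 5, 5, 10, 15, 18, 24],
                    [0, 2, 7, 12, 12, 19, 28]]

lanes = [("Pacemaker", course_length * day / sprint_days, "green"),
         ("This sprint", current_sprint[day], "red")]
for i, (prev, colour) in enumerate(zip(previous_sprints, ["blue", "gold"]), start=1):
    lanes.append(("%d sprint(s) ago" % i, prev[day], colour))

fig, ax = plt.subplots()
for position, (name, distance, colour) in enumerate(lanes, start=1):
    ax.barh(position, distance, color=colour)      # one lane per runner
    ax.text(distance + 0.5, position, name, va="center")

ax.axvline(course_length, linestyle="--")           # the finish line
ax.set_xlim(0, course_length + 14)
ax.set_yticks([])
ax.set_xlabel("Complexity points")
ax.set_title("Day %d of %d" % (day, sprint_days))
plt.savefig("sprint_day_%02d.png" % day)             # one page per day; flip through them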

1 May 2008 | Metrics | Leave a comment