Leftshift’s Weblog

Techniques to improve your code

Quality in the Real World

We have been producing code quality dashboards at work. Our CI server generates them whenever a build succeeds, which gives developers quick feedback on the quality of the code they have just committed. We can compare projects and raise their quality over time. This is all very well and good, but wouldn’t it be nice to see how we are doing compared to the outside world?

So without further ado I present to you 4 code quality dashboards for a selection of .NET open source projects [and one MS one!]

As you can see, quality is variable, with NMock and ASP.NET MVC doing well. Fifty percent unit test coverage for NUnit is a bit shocking, however!


11 September 2008 Posted by | Code Quality, Metrics | , , , , | 1 Comment

The Ultimate Code Smell

Bob Martin has been thinking about adding a new element to the Agile Manifesto around producing quality rather than quantity. He’s described this as ‘Craftsmanship over Execution’. To back this up you can follow the instructions here and measure the number of WTFs per minute. A great idea for a metric, but hard to automate. Maybe an idea for a new startup: provide code metrics in the same way that third-party companies perform penetration testing.

1 September 2008 Posted by | Code Quality, Metrics | , , | Leave a comment

Wordle Up

I’m off on holiday for the next two weeks, so before being largely disconnected I thought I’d point you in the direction of Wordle. It’s a tag cloud creator that works from an RSS or Atom feed, or any text you paste in. Here is the cloud for this blog:

Tag cloud of leftshift.wordpress.com

As you can see it produces very nice images, and the fonts, colours and layout are all customizable. I think I might start adding a tag index to all the docs I produce to give people a visual cue to the content.

Happy coding and see you in a couple of weeks.

15 August 2008 Posted by | Misc. | , | Leave a comment

Quality Testing

As part of a continuous integration cycle, most people consider running unit and integration tests. Some even consider running automated acceptance tests. Fewer still focus on code quality tests. Keeping code maintainable requires a certain amount of effort as the code changes. I think this is what the refactor stage of the TDD red, green, refactor cycle alludes to. As well as refactoring code to remove duplication, there are other considerations to be made with regard to maintainability. We use six indicators to give a finger-in-the-air estimate of the maintainability of a code base. The indicators we use are as follows:

Unit Test Coverage High test coverage is a good indicator of whether a TDD approach is being followed and, if not, gives an optimistic estimate of the chance of a bug being caught. Said another way, if a bug is introduced into the code, the chance of it being caught is at best the percentage of code covered by tests. This very much depends on the quality of the tests, but if you only have fifty percent coverage and introduce a bug, it’s a coin toss whether it’s detected. If the tests are poor, the real figure is much lower than fifty percent.

Percentage of large methods Fairly obvious this one, but large methods are harder to maintain because they contain more code. There is more scope for error, less precision in identifying the cause of any error [each unit test covers more code] and a greater chance that the method is breaking the single responsibility principle, giving it more than one reason to change. What you consider a large method is up to you, but we have been using ten lines of code as our measure.

Class Cohesion For a class to be cohesive, all methods should use all fields. We use the Henderson-Sellers lack of cohesion of methods [LCOM] formula to measure this one. If a class isn’t cohesive, it’s an indicator that unrelated functionality could be split out into its own class. In other words it has more than one reason to change and is therefore breaking the single responsibility principle.
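For concreteness, here’s a minimal sketch of the Henderson-Sellers calculation, assuming you already know how many methods touch each field [the class and method names here are my own, not the API of any particular tool]:

using System.Collections.Generic;
using System.Linq;

static class CohesionMetrics
{
    // methodsUsingField[f] is the number of methods that read or write field f;
    // methodCount is the total number of methods in the class.
    public static double LcomHendersonSellers(int methodCount, IList<int> methodsUsingField)
    {
        if (methodCount <= 1 || methodsUsingField.Count == 0)
            return 0.0; // a single method or no fields: trivially cohesive

        double averageUse = methodsUsingField.Average();

        // 0 when every method uses every field [fully cohesive]; values near 1
        // suggest the class has more than one responsibility and could be split.
        return (methodCount - averageUse) / (methodCount - 1.0);
    }
}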

Package Cohesion For a package or assembly to be cohesive, the classes inside the package should be strongly related. This is a measure of the average number of type relationships within a package. Low cohesion suggests that the types could be split into separate packages.

Class Coupling This is a measure of the number of types that depend on a particular type, a.k.a. afferent coupling. If a high number of types depend on the class in question, making changes to it will be hard without breaking lots of client code. There are a number of reasons why this might occur. Responsibility for one aspect may be split among multiple classes, but more likely you don’t have a loosely coupled design.
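Once you have a map of which types reference which, afferent coupling reduces to a simple count. Here’s a tool-agnostic sketch [the dictionary shape and helper name are my own invention, not the API of any real analyser]:

using System.Collections.Generic;
using System.Linq;

static class CouplingMetrics
{
    // typeReferences maps a type name to the set of type names it uses.
    public static int AfferentCoupling(string targetType,
                                       IDictionary<string, HashSet<string>> typeReferences)
    {
        // Count every other type whose reference set includes the target type.
        return typeReferences.Count(entry =>
            entry.Key != targetType && entry.Value.Contains(targetType));
    }
}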

Package Coupling This is a measure of the number of types outside this package or assembly that depend upon types within this package. One possible reason for high coupling is a packaging problem – things that change together should stay together. Another reason is that the packages in question have many responsibilities.

I’d love to hear feedback on the way you measure the maintainability of code.

12 August 2008 Posted by | Code Quality, Metrics | , , | Leave a comment

Mono Cecil, Visited and Observed

As I mentioned in a previous post, Mono Cecil is a library that lets you load and browse the types of a .NET assembly. For a simple [but potentially useful] look at what you can do, I’ll show you how you might go about listing all of the methods in an assembly. What will our test look like? Well, we will start with the assertion that the number of methods returned is what we expect:

[Test]
public void ShouldReturnNumberOfMethodsTest()
{
    Assert.AreEqual(expectedMethodCount, actualMethodCount);
}

Pretty simple so far, but we have a couple of design decisions to make to complete our test code. I’ll call the class that does the work AssemblyExaminer. It will need to expose a list of methods so I can get the count. It will also need to be passed an assembly as input. To avoid subtle state-related bugs we will pass this in the constructor and make our class immutable. Looking at the Mono.Cecil namespace, the AssemblyFactory.GetAssembly method has three overloads. One takes a filename, one a byte array and the other a stream. For our purposes a stream provides us with the best level of abstraction. With all that in mind here is our [almost] completed unit test:

[Test]
public void ShouldReturnNumberOfMethodsTest()
{
    const int expectedMethodCount = ?;
    using (Stream testAssembly = GetTestAssembly())
    {
        AssemblyExaminer examiner = new AssemblyExaminer(testAssembly);
        int actualMethodCount = examiner.Methods.Count;
        Assert.AreEqual(expectedMethodCount, actualMethodCount);
    }
}

Two things remain. One is simply replacing the ? with the number of methods I expect to find. The other is spinning up the stream containing the assembly. I have called the method that creates the stream GetTestAssembly. We’ll see how to implement that in a minute. Continue reading
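The GetTestAssembly method is covered after the jump, but as for the examiner itself, a rough sketch against the Cecil API described above might look like this [anything beyond what the post mentions is my own guess rather than the final implementation]:

using System.Collections.Generic;
using System.IO;
using Mono.Cecil;

public class AssemblyExaminer
{
    private readonly List<MethodDefinition> methods = new List<MethodDefinition>();

    public AssemblyExaminer(Stream assemblyStream)
    {
        // Load the assembly once in the constructor so the examiner stays immutable.
        AssemblyDefinition assembly = AssemblyFactory.GetAssembly(assemblyStream);

        foreach (TypeDefinition type in assembly.MainModule.Types)
        {
            foreach (MethodDefinition method in type.Methods)
            {
                methods.Add(method);
            }
        }
    }

    // Exposed read-only so callers can count the methods but not change them.
    public IList<MethodDefinition> Methods
    {
        get { return methods.AsReadOnly(); }
    }
}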

18 July 2008 Posted by | .NET, Metrics | , | 2 Comments

Inventive uses of NUnit #2

Number two in a series of posts investigating the more ‘creative’ uses of NUnit. This time around we look at the school register.

NUnit test for a class register

With collective ownership you could get the kids to update their own unit test. I’m sure that would work and nobody else would make any other assertion in the code :]
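For anyone who can’t see the screenshot, the idea is roughly a test along these lines [the pupil names and the IsPresent helper are invented for illustration]:

using NUnit.Framework;

[TestFixture]
public class ClassRegisterTests
{
    [Test]
    public void EveryPupilShouldBePresent()
    {
        string[] register = { "Alice", "Bob", "Charlie" };

        foreach (string pupil in register)
        {
            // Taking the register, one assertion per pupil.
            Assert.IsTrue(IsPresent(pupil), pupil + " is absent");
        }
    }

    private static bool IsPresent(string pupil)
    {
        // Stand-in for however attendance would really be recorded.
        return true;
    }
}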

14 July 2008 Posted by | .NET | , | Leave a comment

MiniSPA Roundup

I attended the miniSPA conference yesterday at the BCS offices near Covent Garden in London. SPA stands for Software Practice Advancement and “is where IT practitioners gather to reflect, exchange ideas and learn.”

miniSPA is billed as a free taster for the full event, where you can experience the most popular sessions. The real thing lasts for three days [and nights!] rather than just the one.

After the usual intro, there was a choice between two streams for each of the three session slots during the day.

My first choice was deciding between Best Practices for Finding your way into new Projects – quickly… and Getting To ‘No’. I plumped for finding your way into new projects. After Marina had set the scene and everyone had introduced themselves, we brainstormed ideas for practices we thought would be useful when starting on a new project. Everybody stuck their post-its on the wall under one of the five categories. The categories were Team / Culture, Me and Myself, Me and Others, Me and My Computer and another that I can’t recall right now. Unfortunately, due to the time constraints, we were unable to review the output. However, I think Marina is going to post the output somewhere, which will be worth examining.

Lastly, each table carried out a metaphorical exercise where we had to pick a scenario outside of the IT world and map its activities back onto an IT project. Our table came up with a prison break. The effect of selecting a topic completely outside the normal domain was interesting. As everybody was essentially a novice, the fear of making a mistake was eliminated. This allowed the participants to brainstorm in a very uninhibited way, and to have some fun. It was actually surprisingly easy to map most activities back onto the world of the IT project. It turns out escaping from gaol lends itself to an agile approach!

Next up was Awesome Acceptance Testing or Storyboarding the Domain Model. I went for the Awesome Acceptance Testing. Unfortunately Dan North couldn’t make it, which was a shame. He was replaced by Simon, one of Joe’s colleagues from Google. The session was an introduction-level talk describing acceptance testing. There was not as much interaction in this one, although a few of the questions asked led to some interesting diversions. I didn’t learn a whole lot from the session, but it did give me confidence that we are doing the right thing and are probably ahead of the curve where I work.

The last choice of the day was a toss-up between Domain-Specific Modelling for Full Code Generation and Effective Pairing. Not having any strong feelings about code generation, I chose the Effective Pairing session. Normally the practice of pair working is described in terms of programmers [pair programming]; this session looked at how effective a practice pairing is in other domains. Everybody lined up in order of how much they liked or practised pairing. We were then split into teams of five, with each team having somebody from the front, middle and back of the line to ensure some balance. Being teams of five, one person from each team worked alone to give adversarial feedback. Pairs and singles swapped after each exercise. There were four pairing exercises allocated among the teams, but again, constrained by time, each team only got to try two of them. The four themes for the pairing exercises were creative, analytic, visual / spatial / physical and people centric.

I did the creative and analytic exercises. The creative exercise consisted of describing a painting to a blind person in at least 200 words. Working in a pair, I found that more detail was picked up than I would have managed individually. There was almost an analysis / brainstorming phase at first where we discussed various aspects of the painting. This was very effective in a pair. However, actually writing the prose was less so. For the analytic exercise I was working alone. Whilst I was able to direct what I did, I certainly missed some of the speed and accuracy that the pairs reported back. I found this session interesting in that it gave me a feel for the point at which pairing stops paying off. In general pairing is a great practice, and this certainly encouraged me to use it where possible and to know at what point to stop pairing.

Overall miniSPA was a well-organised, enjoyable and interactive conference. I definitely recommend attending the real thing next year.

13 July 2008 Posted by | Events | , , , | Leave a comment

Refactoring Made Easier

JetBrains TeamCity is a build management and continuous integration platform that supports .NET and Java. Having set up TeamCity and played with it for a couple of weeks, I’m very impressed by the slick UI and the features provided out of the box. These include a set of code quality features. Even better is the fact that the Professional version is free.

One of the things that really impressed me is the duplicates finder. As the name suggests, it detects duplicate code and currently works with Java, C# [up to 2.0] and VB [up to 8.0]. This helps you target the areas that need refactoring.

Java duplicates in TeamCity

Alongside the duplicates [Java example above] a ‘cost’ is calculated. I’m not sure of the algorithm used, but it seems fairly sensible and the cost bears some relation to the amount of code that is repeated. You can use this to help prioritise your refactoring. To set up your build, simply set the build runner to the duplicates finder. Continue reading

6 July 2008 Posted by | .NET, Code Quality, Continuous Integration, Metrics | , , | Leave a comment

Upcoming Events

See the events page for a list of upcoming events, conferences and that sort of thing.

4 July 2008 Posted by | Events | | Leave a comment

Recently Released

The first beta of NHibernate Version 2 is now available. NHibernate is an ORM based on Java’s Hibernate.

Also out now is the third preview of Microsoft’s ASP.NET MVC implementation.

3 July 2008 Posted by | .NET | | Leave a comment