When setting up a project with more code than fits on a piece of paper I like to have some code metrics around.
The purpose of those metrics is twofold:
- Identify areas in the code base that need improvement.
- Motivate team members to improve those areas.
But what metrics should one use? I tried Sonar but found it to be of very limited use. The problem is that it produces just too many metrics. You can spend hours browsing, slicing, and dicing these metrics without really learning anything from them. Once again: less is more.
I also used toxicity and I still like it. But I also learned from the book Making Software: What Really Works, and Why We Believe It that all these fancy metrics are pretty much equivalent to lines of code when it comes to predicting faults in your software.
So here is the first code metric I recommend: size. I don’t think it matters too much which exact measure you use, but you should have a measure of size for the various artifacts in your project. For a typical Java project you might end up with:
- Number of classes per package
- Lines of Code per class
- Lines of Code per method
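As a minimal sketch of such a size measure (the file extension and the decision to skip blank and comment-only lines are my assumptions, not a fixed rule), a few lines of Python are enough to get lines of code per file:

```python
import os
from typing import Iterable


def count_loc(lines: Iterable[str]) -> int:
    """Count non-blank lines that aren't pure // comments."""
    return sum(
        1
        for line in lines
        if line.strip() and not line.strip().startswith("//")
    )


def loc_per_file(root: str, extension: str = ".java") -> dict:
    """Map each source file under root to its line count."""
    result = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            if name.endswith(extension):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8") as f:
                    result[path] = count_loc(f)
    return result
```

Classes per package and lines per method need a real parser, but even this crude file-level number is a usable starting point.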
The second metric I’d like to see in my projects is test coverage. I don’t expect test coverage to be a useful predictor of faults in the software. But writing unit tests teaches you a lot about modular design, and if you know about modular design and have to write tests for your code, I expect that combination to result in the application of modular design. That’s why I keep an eye on test coverage. Oh, and good tests might actually help with maintenance. There are tons of different kinds of test coverage metrics. I’d choose the one that is most difficult to get to 100% and that is provided by your tooling. I’d expect that to be either path or branch coverage.
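To illustrate why branch coverage is a stricter target than plain line coverage, here is a hypothetical example (the function and test are mine, purely for illustration):

```python
def clamp(x, lower=0):
    """Return x, but never less than lower."""
    if x < lower:
        x = lower
    return x


# This single test executes every line of clamp (100% line coverage),
# but it only ever takes the True branch of the if. The False branch
# (x >= lower) is never exercised, so branch coverage stays at 50%.
assert clamp(-5) == 0
```

A tool reporting only line coverage would tell you this function is fully tested; a branch coverage tool would not, which is exactly why the stricter metric is the more honest target.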
In the book Making Software: What Really Works, and Why We Believe It there is a chapter in which metrics are analysed for their ability to predict faults in software. One of the metrics that actually works is the amount of change: if you change a file a lot, it is more likely to contain bugs. Therefore I’d like to have the number of commits during the last N days as a code metric. Unfortunately I have no easy way to make that available to a project team, but I assume for now this is just because I haven’t looked yet.
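A rough sketch of how one could compute this from version control (the 30-day default window is my assumption; the parsing works on the output of `git log --name-only` with an empty pretty format, which prints one touched file name per line):

```python
import subprocess
from collections import Counter


def commits_per_file(log_output: str) -> Counter:
    """Count how often each file appears in `git log --name-only` output."""
    files = (line.strip() for line in log_output.splitlines())
    return Counter(f for f in files if f)


def recent_change_counts(days: int = 30) -> Counter:
    """Number of commits touching each file during the last N days."""
    out = subprocess.run(
        ["git", "log", f"--since={days} days ago",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return commits_per_file(out)
```

Run inside a repository, `recent_change_counts().most_common(10)` would list the ten most frequently changed files, which is exactly the hot spot list the book suggests watching.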
I think with these three metrics you should be all set for the first purpose I mentioned above.
But what about the motivational part? Is a test coverage of 85% motivating? I don’t think so. What I found motivating in the past is the visualization of metrics over time. Measure the three metrics mentioned above every day and make a graph out of them. Display it on the start page of your CI build. I find this especially nice for systems that aren’t in a very healthy state as far as code quality is concerned. It is very frustrating to look at the toxicity of your project and think: “Oh man, we have to get rid of 5739 points of toxicity.” But when you can change that into: “Cool, while adding this feature I refactored this class and now the toxicity is ten points lower than a week ago”, it actually becomes motivating. At least for number geeks like me.