ywen.in.coding

Everything I do

100% Test Coverage Is the Starting Point


Introduction

I was surprised when I read this post, because I am one of Martin Fowler's fans, and still am. But I have to disagree with a large part of Martin's view on this issue.

First, the part I agree with him on: test coverage should not be a goal, and a high number doesn't mean much. But from this fact, I draw my own conclusions.

What is the ideal test coverage number?

In short, a 100% test coverage number is not a goal; it is a requirement, a starting point for writing good tests. Anything below 100% is unacceptable.

Why? I will use some Ruby code as my example:

return true if condition?

with one test in RSpec:

context "when the condition is true" do
  it "returns true" do
    subject.stub(:condition?).and_return true
    subject.method.should be_true
  end
end

When you run the test with coverage, the line gets a 100% coverage number. But do you really test the method well? The answer is no, because you also need to test what happens when the condition is false. So the "real" coverage here is actually roughly 50%.
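To make that gap concrete, here is a minimal sketch in plain Ruby rather than RSpec, to keep it self-contained. The class name Checker and its flag are hypothetical; the method mirrors the one-liner above:

```ruby
# Hypothetical class wrapping the one-line method above.
class Checker
  def initialize(flag)
    @flag = flag
  end

  def condition?
    @flag
  end

  # returns true when the condition holds; otherwise falls through
  def check
    return true if condition?
  end
end

# The single test above only ever exercises the true branch:
Checker.new(true).check   # => true
# The false branch silently returns nil, and no test ever looks at it:
Checker.new(false).check  # => nil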

Now think about it: when the test coverage tool tells you that you are 100% covered, it is probably lying; you aren't. So what about Martin's claims of "high coverage" like 90%? How poor must that coverage be, if even 100% coverage is not that convincing?

Test every method?

Do I absolutely test each and every single method? The answer is no. Here is an example.

I love the constructor injection pattern, so a lot of my code looks like this:

class A
  attr_reader :dep1, :dep2
  private :dep1, :dep2
  def initialize(dep1, dep2)
    @dep1, @dep2 = dep1, dep2
  end
end

How would I test this code in isolation? Basically I can't, and I don't. So if I write this code and commit it, my test coverage will drop below 100%. Is that OK? No, absolutely not. Then what did I do wrong?

I have to step back and ask myself: why did I write this code? Do I really need it? The answer most likely is: yes, I do. However, it is not useful on its own; I will need the dependencies in some of the instance methods of that class. But I don't need the dependencies until I actually work on those instance methods.

So what I did wrong was the sequence of my actions. I should have written one test that presents an intent, a usage of an instance of this class, and then started to implement the method that does it. While implementing the method, I may find out that I have to add such a constructor (or not, for what it's worth). At that point, I add the constructor, and it is covered by the execution of the test against the instance method. My coverage number is still 100%.
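Here is a sketch of that sequence, with hypothetical names (Report, formatter, store, #render) invented purely for illustration: the test targets the instance method, and the constructor gets covered along the way.

```ruby
# Hypothetical class: the constructor exists only because #render needs
# the dependencies, so any test that exercises #render also covers
# initialize. No separate constructor test is needed.
class Report
  attr_reader :formatter, :store
  private :formatter, :store

  def initialize(formatter, store)
    @formatter, @store = formatter, store
  end

  # the behavior the test actually cares about
  def render(id)
    formatter.call(store[id])
  end
end

store = { 1 => "hello" }
formatter = ->(text) { text.upcase }
Report.new(formatter, store).render(1)  # => "HELLO"
```

Driving the code from the instance method outward keeps every line, including the constructor, justified by a test that expresses a real usage.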

This example is essentially the principle of "do not test private methods".

validates_presence_of

I would like to go off topic a little bit to discuss a view that some people hold: do not test your validations in Rails, since you would be testing the framework.

Well, the truth is, you are not testing the framework. When you add one line to your class like

validates_presence_of :name

you add a business rule to it. The framework does not have this rule; it doesn't know that :name is required in this model class. So you should be testing this line to make sure the name is indeed required.

Think of an extreme situation. Given that you don't test this line, some developer accidentally deletes it. No tests would fail, and the faulty code goes to production. The next thing you know, you either raise a bunch of 500 errors when a user doesn't fill in the name, or worse, some users register successfully without a name. You then may not be able to charge a user's card because no name is associated with it. Would you still think this line is a framework concern at that point?

If you write your test like this:

it "is required" do
  subject.name = nil
  subject.valid?
  subject.errors[:name].should include("is required")
end

then you are testing the framework, because the error message is generated by the framework, not by your code. So something like

subject.should have(1).error_on(:name)

is good.

Some would argue that it is painful to write five or so lines of code for each attribute being validated. Very true, but how about this in your test:

it_requires_attributes :name, :email

This looks easy, right? All you have to do is add a macro behind the scenes that expands into some good tests. I am not advocating this particular macro; I am just saying there are a lot of ways to keep your tests DRY.
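One possible shape for such a macro, sketched as an assumption rather than a recommendation: for each attribute, it defines one example that nils the attribute out and reuses the `have(1).error_on` matcher from the earlier example.

```ruby
# Hypothetical implementation of it_requires_attributes. Extended into
# an example group, it defines one "requires <attr>" example per
# attribute.
module ValidationMacros
  def it_requires_attributes(*attrs)
    attrs.each do |attr|
      it "requires #{attr}" do
        subject.send("#{attr}=", nil)
        subject.should have(1).error_on(attr)
      end
    end
  end
end

# In spec_helper.rb, it would be wired in with something like:
#   RSpec.configure { |config| config.extend ValidationMacros }
```

The macro is just Ruby: it calls `it` once per attribute at group-definition time, so each validated attribute still gets its own independently reported example.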

In team environments

It is very difficult to configure a CI environment with less than 100% test coverage. The problem is this: say your manager demands 86% test coverage. Then one could just wait for a teammate's commit to raise the coverage to 86.1%, commit without tests, and drop the coverage back to 86%. Mission accomplished.

So the only way that makes sense is to ensure the test coverage increases or stays the same with every commit (or every merge back to the main line, if you use git). To do so, a lot of work must be done on the CI server's side to be able to:

  • Know the current threshold
  • Increase the threshold automatically when the test coverage goes up
  • Break the build when the test coverage falls below the threshold
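A minimal sketch of that ratchet logic; the function name, the threshold file, and how the current coverage number reaches the script are all assumptions for illustration:

```ruby
# Hypothetical CI ratchet: fail the build if coverage dropped below the
# stored threshold, and raise the threshold whenever coverage improves,
# so it can never slide back down.
def ratchet_coverage(current, threshold_file)
  threshold = File.exist?(threshold_file) ? File.read(threshold_file).to_f : 0.0
  raise "coverage #{current}% is below threshold #{threshold}%" if current < threshold
  File.write(threshold_file, current.to_s) if current > threshold
  current
end
```

Note how much state this carries around compared to the 100% policy, which reduces the whole check to a single comparison against 100.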

With 100% code coverage, this is much simpler: each commit either has 100% coverage, or it doesn't. The CI server just needs to fail the ones that don't.

Summary

I understand Martin's position completely. He is addressing the problem where the test coverage number becomes only a goal, a symbol, not a real effort. But that doesn't mean a CI server should not measure test coverage. In my own practice, when we have this number and realize how bad it is, we tend to do a better job and pay more attention to the code we write. It is a very useful learning tool for a team.

When a team becomes better, the build will no longer break because of test coverage, and then it doesn't really matter whether the CI server keeps measuring it. It will give people confidence and validate what they have been doing. A good ending.
