Code coverage is (almost) useless

Code coverage is just a number, and one that can be easily manipulated. Managers love to quantify things, so it doesn't surprise me that some people care a lot about it, lulling themselves into complacency and a false sense of safety when the number is high enough.

Code coverage indicates how much of your code has been exercised by tests (which is useful up to a point), but it says nothing about the quality of those tests. Are they testing the right thing? Are they covering all the relevant cases? Are they easy to read and maintain? Are they well written? None of these questions can be answered by code coverage.

More often than not, achieving 100% coverage means testing what we don't need to test. At that point, higher code coverage is in fact detrimental: it means we are spending our time on the wrong problem. More tests is worse than enough tests.

From Martin Fowler, talking about code coverage (he calls it "test coverage"):

If you make a certain level of coverage a target, people will try to attain it. The trouble is that high coverage numbers are too easy to reach with low quality testing. (At the most absurd level you have AssertionFreeTesting.) But even without that you get lots of tests looking for things that rarely go wrong distracting you from testing the things that really matter.
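To make the "assertion-free testing" failure mode concrete, here is a minimal sketch in Python. The `discount` function and its test are hypothetical, invented for illustration: the test executes every line of the function, so a coverage tool reports 100%, yet it verifies nothing about the results.

```python
def discount(price, rate):
    """A hypothetical function under test."""
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

def test_discount():
    """An assertion-free test: it runs both branches of discount,
    giving 100% line and branch coverage, but asserts nothing.
    It would still pass if discount returned the wrong value."""
    discount(100.0, 0.2)
    try:
        discount(100.0, 2.0)
    except ValueError:
        pass

test_discount()
```

A correct test would instead assert on the outcome, e.g. that `discount(100.0, 0.2)` returns `80.0`; coverage alone cannot tell the two apart.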

Human beings adapt quickly in order to survive. If the only thing you care about is a number, as cold as that, people will find a way to reach that number. No matter what.
