26 Sep 2015, 14:41
Stefan Seegers (6 posts)

If we strive for 100% code coverage, there are still scenarios where problems can occur, for example:


public double computeVelocity(double distanceInMeters, double timeInSeconds) {
    return distanceInMeters / timeInSeconds;
}

If I test with input values 6 and 3 and expect the result to be 2, everything works fine and we have 100% code coverage. But if I pass 0 into timeInSeconds, the division quietly produces Infinity instead of a meaningful velocity (with integer division it would even throw an ArithmeticException). The same problem applies to other RuntimeExceptions like NullPointerException.
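To make the gap concrete, here is a minimal sketch of the situation; the class name `Velocity` is my own, and the comments assume standard Java double semantics:

```java
public class Velocity {
    public static double computeVelocity(double distanceInMeters, double timeInSeconds) {
        return distanceInMeters / timeInSeconds;
    }

    public static void main(String[] args) {
        // Happy path: 6 m in 3 s is 2 m/s, and the method body is now 100% covered.
        System.out.println(computeVelocity(6, 3));
        // Edge case: the very same line is executed, coverage stays at 100%,
        // but the result (Infinity) is meaningless for a velocity.
        System.out.println(computeVelocity(6, 0));
    }
}
```

Both calls exercise the same single line, which is exactly why the coverage number cannot distinguish the good case from the bad one.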

If those problems are handled elsewhere, we’re fine. If not, we’re probably in trouble.

So far, testing for behaviour, including edge cases, is surely the right thing. Code reviews and exploratory testing are also helpful.

But trusting your code just because it has 100% code coverage could give a false sense of security.

Any recommendations for these cases besides knowing that 100% might not be enough?

27 Sep 2015, 17:55
David Scott Bernstein (4 posts)

Hi Stefan,

Great question!

You are quite right. 100% code coverage does not mean 100% bug-free. Code coverage is, in my opinion, a poor metric because it's too easy to game and, as you point out, gives a false sense of security.

Remember, TDD is not about replacing QA, and when we try to use TDD as a form of QA we can end up doing TDD poorly. The tests we create doing TDD help developers build out testable behaviors. The tests we create doing QA help testers verify certain bugs don't exist. These are different activities, and when we try to combine them we get into trouble, just like we end up with writer's block when we try to write prose and edit ourselves at the same time.

My book shows how developers can use TDD to drive the development of testable behaviors and get some (but not all) regression coverage. I still advocate creating additional tests as part of a QA effort because that's needed in many situations.

However, as a developer I'm responsible for handling runtime exceptions in my code, so after writing a test for the happy path for computeVelocity(…) I'd write another test to drive the creation of an IllegalArgumentException if timeInSeconds is less than or equal to zero. BTW, I generally prefer to check arguments like this right before they're used so the checks are not repeated throughout the code.
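That guard clause might look something like the sketch below; the class name `VelocityCalculator` and the exception message are my own illustration, not David's code:

```java
public class VelocityCalculator {
    public double computeVelocity(double distanceInMeters, double timeInSeconds) {
        // Validate the argument right where it is about to be used,
        // so callers don't have to repeat this check elsewhere.
        if (timeInSeconds <= 0) {
            throw new IllegalArgumentException(
                "timeInSeconds must be positive, was: " + timeInSeconds);
        }
        return distanceInMeters / timeInSeconds;
    }
}
```

The test that drives this would assert two things: the happy path still returns the expected velocity, and a zero (or negative) time now fails fast with IllegalArgumentException instead of silently producing Infinity.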

What I find more troubling is seeing people write unit tests that call too much code, so they end up covering the same code over and over, which is both unnecessary and slows down the build. Testing behaviors and not implementations, testing only what's needed and not too much, and writing testable code in the first place aren't yet widely understood practices in our industry. I try to give guidelines for these things in my book.

In summary, I recommend what I say in my book, which is not to think of TDD as a form of QA but rather as a way of specifying and building out testable behaviors. This can help QA but it shouldn’t replace QA.

David.

29 Sep 2015, 07:06
Stefan Seegers (6 posts)

Hi David,

Sounds like we're on the same page, thanks for the clarification.

Stefan
