You are very right. 100% code coverage does not mean 100% bug free. Code coverage is, in my opinion, a poor metric because it’s too easy to game and, as you point out, gives a false sense of security.
Remember, TDD is not about replacing QA, and when we try to use TDD as a form of QA we can end up doing TDD poorly. The tests we create doing TDD help developers build out testable behaviors. The tests we create doing QA help testers verify certain bugs don’t exist. These are different activities, and when we try to combine them we get into trouble, just like we end up with writer’s block when we try to write prose and edit ourselves at the same time.
My book shows how developers can use TDD to drive the development of testable behaviors and get some (but not all) regression. I still advocate creating additional tests as part of a QA effort because that’s needed in many situations.
However, as a developer I’m responsible for handling runtime exceptions in my code, so after writing a test for the happy path for computeVelocity(…) I’d write another test to drive the creation of an IllegalArgumentException if timeInSeconds is less than or equal to zero. BTW, I generally prefer to check this kind of argument right before it’s used so the checks aren’t repeated throughout the code.
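To make that concrete, here’s a minimal sketch of what I mean. The parameter names and the Physics class are my own illustration, not from any real codebase; the point is that the guard clause lives right where timeInSeconds is used, so callers don’t have to repeat the check:

```java
// Hypothetical example: the parameter list of computeVelocity is assumed here.
public class Physics {
    public static double computeVelocity(double distanceInMeters, double timeInSeconds) {
        // Guard clause checked right before the argument is used,
        // so the validation isn't duplicated throughout the code.
        if (timeInSeconds <= 0) {
            throw new IllegalArgumentException(
                "timeInSeconds must be positive but was " + timeInSeconds);
        }
        return distanceInMeters / timeInSeconds;
    }
}
```

The unhappy-path test I’d write first would simply call computeVelocity with a zero or negative time and assert that IllegalArgumentException is thrown; that failing test is what drives the guard clause above into existence.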
What I find more troubling is that I see people write unit tests that call too much code, so they end up covering the same code over and over, which is unnecessary and slows down the build. Testing behaviors and not implementations, testing only what’s needed and not too much, and writing testable code in the first place aren’t yet widely understood practices in our industry. I try to give guidelines for these things in my book.
In summary, I recommend what I say in my book, which is not to think of TDD as a form of QA but rather as a way of specifying and building out testable behaviors. This can help QA but it shouldn’t replace QA.