TDD: Why Coding to Pass the Tests Makes Sense

August 31, 2013

Lately I’ve been working on learning Test-Driven Development (TDD) and trying to use it for a project or two. I’m finding the TDD way of thinking a bit difficult to wrap my mind around. Much of it initially seems like the wrong way to do things. Eventually, though, I find that things just start falling into place and it becomes clear why TDD imposes such seemingly odd requirements.

For those who don’t know, Test-Driven Development is a methodology that aims to make unit testing easier by making developers produce more testable code. Its goal is to ensure that there is a test for every piece of functionality. One big advantage is that the resulting unit tests can double as a specification, so it aids documentation as well as testing. Since I’m still learning, though, you should probably not rely on me for an explanation. Instead, you might try the Wikipedia article on TDD or the tutorial in the SimpleTest documentation that I started with. (I know, PHPUnit is supposed to be better, but I haven’t had any luck setting it up yet.)

The part of TDD that bothered me was not the idea of writing the tests before the actual program code, but the insistence that the program code be written so that it just passes the tests. If you just have a test that checks for a return value, for instance, the program code isn’t supposed to do anything but return the expected value. This seemed silly to me: Why write code that passes the test, but doesn’t actually do anything? I felt like that defeated the whole purpose of testing by producing code that passed the tests but didn’t actually work. Sure, it makes sense for the tests to be implementation-agnostic, but the implementation should actually do something.
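To make that concrete, here's a sketch of the "just pass the test" step. It's in Python rather than PHP's SimpleTest, simply because it's easy to show self-contained; the `add` function and its test are hypothetical, not from my actual project:

```python
import unittest

# Hypothetical function under test. The strict TDD answer at this stage:
# write only enough code to satisfy the one existing test, even if that
# means returning a hard-coded constant.
def add(a, b):
    return 5

class TestAdd(unittest.TestCase):
    def test_add_two_and_three(self):
        self.assertEqual(add(2, 3), 5)

# Run the suite programmatically rather than via unittest.main(),
# so the script doesn't exit after the tests finish.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner().run(suite)
```

The test passes, yet `add` obviously doesn't add anything, which is exactly what struck me as silly at first.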

It finally clicked a couple of weeks ago when I started a new project and did things the TDD way even when they didn't make sense to me. I wrote a test, wrote some code that just passed it, and then went back and wrote another test that would require the code to change. Eureka! For one thing, it dawned on me that all I needed to do was keep writing more tests that required the program to do what it needed to do. More to the point, I noticed that making these tiny, incremental changes helped ensure that I would never introduce any functionality for which there wasn't already a functioning test. And isn't ensuring that the tests cover as much of the program code as possible what TDD is all about?
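Continuing the hypothetical `add` sketch (again in Python for convenience, not code from my project), this is the step where a second test forces the hard-coded constant to become a real implementation:

```python
import unittest

# With a second test in place, returning a constant no longer passes,
# so the implementation is forced to do actual work.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_two_and_three(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_ten_and_one(self):
        # The new test that requires the code to change:
        # "return 5" would fail here.
        self.assertEqual(add(10, 1), 11)

# Run the suite programmatically so the script doesn't exit.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner().run(suite)
```

Every bit of behavior in `add` now exists only because some test demanded it, which is the coverage guarantee I was missing the point of.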

If you don’t mind my extrapolating a bit, I think I learned something else, too: If you’re going to learn a methodology, you have to accept it on its own terms. I don’t mean that programming requires blind faith; just that it’s important to keep an open mind rather than approaching a methodology with the stance that you’re going to keep the parts that fit your existing assumptions and reject the rest. In other words, learn to follow the rules before you break them.

Having had that insight, I’m eager to see what else I can learn from this experiment. If I come up with anything interesting, I’ll post it here.