Until recently, I was on an Extreme Programming team at the Humana DEC. Every workday we practiced test-driven development (TDD). After 100 days, I want to point out some differences between TDD in theory and TDD in practice.
So, with respect to the Three Laws of TDD, here are my caveats:
- You don't need to test everything
- You can write more than one failure at a time
- You don't need to practice TDD at the nano cycle
- You should delay design decisions until the blue phase
- You should refactor tests too
Before you punch your screen allow me to elaborate.
You don't need to test everything
In theory, and according to the first law of TDD:
You can't write any code until you have first written a failing test.
In practice, I rarely write tests for content, design, configuration, etc. I write tests for any code that contains logic.
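To illustrate the distinction, here is a minimal Python sketch (all names hypothetical, tests in pytest style): the function below contains logic and earns a failing test first, while the plain configuration constant does not.

```python
# All names hypothetical. The constant is plain configuration - no logic,
# so I wouldn't write a test for it.
TAX_RATE = 0.07

# This function contains logic, so it gets a failing test first.
def discounted_total(subtotal: float, is_member: bool) -> float:
    """Members get 10% off; tax applies to the discounted amount."""
    discount = 0.10 if is_member else 0.0
    return round(subtotal * (1 - discount) * (1 + TAX_RATE), 2)

def test_members_receive_ten_percent_discount():
    assert discounted_total(100.00, is_member=True) == 96.30

def test_non_members_pay_full_price():
    assert discounted_total(100.00, is_member=False) == 107.00
```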
You can write more than one failure at a time
In theory, and according to the second law of TDD:
You can't write more of a test than is sufficient to fail.
In practice, I often write a few failures at a time. However, these are typically within the same test and always at the same level. That is, a few unit test failures or a few integration test failures. Then I make them pass one by one, as in the sketch below.
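A hypothetical Python sketch of what that might look like: two unit-level tests written together against a stub, so both start red, then made green one at a time.

```python
# All names hypothetical. Two unit-level tests written together against
# a stub, so both start red.

def slugify(title: str) -> str:
    raise NotImplementedError  # stub: both tests below fail

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

# Then pass them one by one:
# 1. `return title.lower().replace(" ", "-")` turns the first test green.
# 2. Stripping punctuation as well turns the second test green.
```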
You don't need to practice TDD at the nano cycle
In theory, and according to the third law of TDD:
You can't write more code than is sufficient to pass the currently failing test.
In practice, I follow TDD Laws #2 and #3 when working with a new codebase or new technology. Once I am familiar, I write the failing test and the code to make it pass in one cycle. I see no need to repeat the red-green cycle at the minimal pace [1].
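For illustration, a hypothetical Python sketch of a single combined cycle: the failing test and a direct implementation written in one pass, skipping the intermediate hard-coded return that a strict nano cycle would call for.

```python
# All names hypothetical. One red-green cycle: the failing test and a
# direct implementation, written together rather than via many nano cycles.

def test_initials():
    assert initials("Ada Lovelace") == "AL"

def initials(full_name: str) -> str:
    # Skips the intermediate hard-coded `return "AL"` step entirely.
    return "".join(part[0].upper() for part in full_name.split())
```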
You should delay design decisions until the blue phase
In theory, as noted in the third law of TDD, the green phase is about writing minimal code to make the test pass.
In practice, many people refactor during the green phase (or earlier). This is too early. To avoid refactoring during the green phase, I call YAGNI on nearly everything. Delay design decisions until the blue phase. By then you'll have a better understanding of the code and tests to guide your refactor.
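A hypothetical Python sketch of the idea: keep the green phase naive, then make the design decision in the blue phase, once the tests exist to guide it.

```python
# All names and rates hypothetical.

# Green phase: the simplest thing that passes - YAGNI, no abstractions yet.
def shipping_cost(country: str) -> float:
    if country == "US":
        return 5.00
    if country == "CA":
        return 8.00
    return 15.00

# Blue phase: with the tests green and the cases visible, the design
# decision (here, a lookup table) is made with better information.
SHIPPING_RATES = {"US": 5.00, "CA": 8.00}
DEFAULT_RATE = 15.00

def shipping_cost_refactored(country: str) -> float:
    return SHIPPING_RATES.get(country, DEFAULT_RATE)
```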
You should refactor tests too
In theory, all code should be refactored.
In practice, tests are rarely refactored. Tests are code too and should be refactored during the blue phase. Furthermore, when practicing TDD, tests serve as documentation. It is therefore equally important, if not more so, that the test code communicates clearly.
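A hypothetical Python sketch of a test refactored during the blue phase: the setup noise moves into a named helper so the test reads like documentation.

```python
from dataclasses import dataclass

# All names hypothetical; a tiny model so the example is self-contained.
@dataclass
class User:
    name: str
    email: str
    role: str
    active: bool

    def can(self, permission: str) -> bool:
        return self.role == "admin" and self.active

# Before: setup noise buries the behavior the test is documenting.
def test_admin_can_delete_posts_verbose():
    user = User(name="Pat", email="pat@example.com", role="admin", active=True)
    assert user.can("delete-posts")

# After blue-phase refactoring: a named helper makes the intent readable.
def make_admin(**overrides):
    defaults = dict(name="Pat", email="pat@example.com", role="admin", active=True)
    defaults.update(overrides)
    return User(**defaults)

def test_admin_can_delete_posts():
    assert make_admin().can("delete-posts")
```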
[1] While writing this post, I found a post by Uncle Bob in which he discusses the different TDD cycles. Much of the theory above operates at the nano cycle. What I have described in practice mostly spans the minute and longer cycles.
Want more? Follow @gonedark on Twitter to get weekly coding tips, resourceful retweets, and other randomness.
Great post!
Usually, when I am writing new code I try to define an interface before I write tests for it. I find that the tests start to fall out of it logically after the interface has been well-defined. I usually start with the question, "How do I want to invoke this task?" My metric is usually asking if a brand new developer with no knowledge of the project would be able to understand what is happening from the interface alone. Basically, ensuring clean code.
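A hypothetical Python sketch of that interface-first approach (all names invented): the signature answers the invocation question up front, and the test falls out of it.

```python
from typing import Protocol

# Hypothetical names throughout. Defining the interface first answers
# "How do I want to invoke this task?" before any test exists.
class ReportExporter(Protocol):
    def export(self, report_id: int, fmt: str) -> bytes: ...

# A minimal implementation so the example runs; fmt handling is elided.
class CsvExporter:
    def export(self, report_id: int, fmt: str) -> bytes:
        return f"id,total\n{report_id},0\n".encode()

# With the interface settled, the test reads naturally:
def test_export_returns_csv_header():
    assert CsvExporter().export(report_id=42, fmt="csv").startswith(b"id,total")
```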
This is something I see devs do all the time, especially when they try to make the problem fit a design pattern up front rather than refactoring to the pattern. Don't start with any design. Refactor to the design when you have enough information to make valuable decisions about it, given the domain and the context.
I write code as clean as possible to make it self-documenting. I view tests in the same way. How do you use `Object`? Take a look at `TestObject`. It's that simple, so long as you keep your tests refactored. And you are right, they are also code. And if we use tests as documentation, then we need to treat them as such. The only thing worse than no documentation is documentation that's wrong. In the same way, a bad test is worse than no test at all. And an outdated test is a bad test.