When you are faced with a team that's new to agile development, it's common to spend a lot of time driving home the need to test everything. Along with refactoring, test-driven (or at least test-conscious) development is the most basic skill the aspiring extreme programmer must acquire. We strive for testability in design and the cheapest, most effective quality we can get. So is there such a thing as bad testing? Can tests hinder more than help? Can tests just be a downright pain in the rear?
The answer is a resounding YES! YES! YES!
Just because you code in an object-oriented language does not guarantee that what you write will contain any object-oriented characteristics whatsoever. Similarly, just because you write JUnit test cases does not guarantee that what you produce will display any of the qualities of an effective and adaptable test suite. Naive application of testing frameworks gives developers just as much scope to exercise bad judgement and bad taste as any other technology. Now, while I'd certainly rather have a project with lots of less-than-ideal tests than no tests at all, we need to learn to 'smell the tests' to really get the most out of our (and our customers') investment in testing.
So here are some of the smells that tests can emit. I'm sure you can think of more, so please drop me a line so I can share your pain...:P
- No statement of intent. If I have a Widget with a getPrice() method, I often see a testGetPrice() test method, and my first thought is "test getPrice() does what?". Prefer names that express the intention of the test, such as testGetPriceReturnsZeroWhenNotInitialisedExplicitly() (see the first sketch after this list). There are tools that can turn your test names into human-readable documentation, so don't be afraid of full-length sentences as test method names.
- Irrelevant assertions. It is common (particularly in functional tests) for a test that is concerned with proving one small incremental behaviour to make irrelevant assertions about all sorts of other fields, even on other screens. Prefer semantic generalisations over value-specific assertions. For example, if your test is not concerned with the actual price algorithm, don't assertEquals("46.12", widget.getPrice()) when you can say assertNonZeroMonetaryValue(widget.getPrice()) - even though it's a bit more work at first (the first sketch after this list shows one way to write such a helper).
- Setup leakage. It is extremely common for developers new to testing to struggle to distinguish the concerns of their test from the setup steps required to get to the point where the test can begin. The newly test-infected developer can thus create maintenance burdens and simply waste time reinventing the testing wheel with every test case. Strive for a clear separation between setup and test, and look for reuse in the setup code via fixtures and scenarios (the second sketch below shows one way).
- Overtesting. Many developers find it difficult to focus only on what is relevant to the test at hand. It is not only more efficient to assume the rest of the system works when writing a new test, but it also improves maintainability, as each test is more isolated and targeted. It is common for developers to tie themselves in knots getting controlled input and output moving between different components when a mock would be faster and more effective (the third sketch below). Leave it to the other tests to do their job, and assume away responsibilities that don't belong to the code under test right now.
- Multi-layer tests. Unit tests should rarely (if ever) traverse multiple layers in your design. A sure sign that they do is when they run slowly. Unit tests should run at a rate of thousands per minute, while functional tests tend to run at 30-50 per minute, so prefer unit tests to achieve coverage and functional tests to prove the wiring of configured components.
- In-container unit tests. T'ain't no such thang. Don't make sense. If you think you have some, that's a smell. :)
- Fractured functional tests. Adding functionality a small bit at a time leads to functional tests with a lot of overlap, and there is no force encouraging the consolidation of these tests other than a slow build. I'm not sure how to fix this one other than a slow slog of refactoring. We have a lot of that joy ahead of us on our current project!
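To make the first two smells concrete, here's a minimal JUnit-style sketch of intention-revealing names and a semantic assertion helper. The Widget class, its BigDecimal price, and the test class name are all invented for illustration; only getPrice(), testGetPriceReturnsZeroWhenNotInitialisedExplicitly() and assertNonZeroMonetaryValue() come from the discussion above.

```java
import junit.framework.TestCase;
import java.math.BigDecimal;

public class WidgetPricingTest extends TestCase {

    // Hypothetical stand-in for the Widget discussed above.
    static class Widget {
        private BigDecimal price = BigDecimal.ZERO;
        BigDecimal getPrice() { return price; }
        void setPrice(BigDecimal price) { this.price = price; }
    }

    // The name states the expected behaviour, not just the method under test.
    public void testGetPriceReturnsZeroWhenNotInitialisedExplicitly() {
        Widget widget = new Widget();
        assertEquals(BigDecimal.ZERO, widget.getPrice());
    }

    // Semantic assertion: this test cares that the price is a sensible
    // monetary value, not that it equals the magic number 46.12.
    public void testPricedWidgetReportsNonZeroMonetaryValue() {
        Widget widget = new Widget();
        widget.setPrice(new BigDecimal("46.12"));
        assertNonZeroMonetaryValue(widget.getPrice());
    }

    private void assertNonZeroMonetaryValue(BigDecimal value) {
        assertNotNull("expected a monetary value", value);
        assertTrue("expected a value greater than zero",
                value.compareTo(BigDecimal.ZERO) > 0);
    }
}
```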
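For setup leakage, one sketch of the separation I mean: the "given" lives in a shared fixture, and each test method only says what it is actually proving. The Catalogue class here is invented for illustration; the point is the setUp() fixture rather than re-inventing the scenario inline in every test.

```java
import junit.framework.TestCase;
import java.util.HashMap;
import java.util.Map;

public class CataloguePricingTest extends TestCase {

    // Hypothetical class under test, just enough to make the sketch run.
    static class Catalogue {
        private final Map<String, String> prices = new HashMap<String, String>();
        void add(String name, String price) { prices.put(name, price); }
        String priceOf(String name) { return prices.get(name); }
    }

    private Catalogue catalogue;

    // The scenario is built once here and reused by every test in the class,
    // instead of leaking setup steps into each test method.
    protected void setUp() {
        catalogue = new Catalogue();
        catalogue.add("standard-widget", "9.99");
    }

    // Each test now reads as "when X, then Y" against the shared scenario.
    public void testKnownWidgetIsPriced() {
        assertEquals("9.99", catalogue.priceOf("standard-widget"));
    }

    public void testUnknownWidgetHasNoPrice() {
        assertNull(catalogue.priceOf("no-such-widget"));
    }
}
```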
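And for overtesting, a sketch of isolating the code under test with a hand-rolled stub instead of wiring up the real collaborator. All the types here (PriceLookup, Order) are invented for illustration; a mock library would do the same job with less typing, but the idea is the same: assume pricing works and test only the responsibility at hand.

```java
import junit.framework.TestCase;
import java.math.BigDecimal;

public class OrderTotalTest extends TestCase {

    // Hypothetical collaborator interface and code under test.
    interface PriceLookup {
        BigDecimal priceFor(String sku);
    }

    static class Order {
        private final PriceLookup prices;
        Order(PriceLookup prices) { this.prices = prices; }
        BigDecimal totalFor(String sku, int quantity) {
            return prices.priceFor(sku).multiply(new BigDecimal(quantity));
        }
    }

    public void testTotalMultipliesUnitPriceByQuantity() {
        // Stub the collaborator: this test assumes pricing works elsewhere
        // and is only interested in the multiplication done by Order.
        PriceLookup stubPrices = new PriceLookup() {
            public BigDecimal priceFor(String sku) {
                return new BigDecimal("2.50");
            }
        };
        Order order = new Order(stubPrices);
        assertEquals(new BigDecimal("10.00"), order.totalFor("any-sku", 4));
    }
}
```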
None of this alters my belief that test-driven development is the best method I know for building quality software. But it does mean that (once again) we need to be careful about silver bullet syndrome when it comes to testing.
 
You should probably have a look at Ian's collection of test anti-patterns at: http://www.ianbourke.com/servlet/space/Test+Anti-patterns
and this set of test smells: http://tap.testautomationpatterns.com:8080/Test%20Smells.html, which is part of the test automation patterns web site
Once you've been test-infected, it can turn sour on you and really begin to smell bad. Sounds like gangrene:
ReplyDelete"necrosis or death of software due to obstructed thinking, usually followed by decomposition and putrefaction."