Why do we write automated tests?

Intro

Automated software testing is important. It is what gives us the confidence to write code, to change code, and to understand code. Thinking about all three of these goals for every test will lead to better tests and better code.

Writing code

The most obvious reason to write automated tests is to validate behavior during development. From mundane logic to complex algorithms, a fast-running automated test focused on a specific behavior can prevent the simple mistakes that take a long time to debug. I practice Test Driven Development (TDD) when developing new code. At a high level, TDD focuses on writing the test before the production code, thus guaranteeing a high level of test coverage and a low bug rate. When done properly, TDD will drastically reduce debug time and improve confidence in estimation and delivery.
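To make the write-the-test-first idea concrete, here is a minimal sketch of one TDD cycle. The Calculator class and its add method are invented for illustration; in practice the test would fail to compile first (red), the production code would be written just to make it pass (green), and then both would be refactored.

```java
// Step 1 (red): this test is written first, before Calculator exists,
// so it initially fails to compile.
class CalculatorTest {
    static void shouldSumTwoPositiveNumbers() {
        Calculator calc = new Calculator();
        if (calc.add(2, 3) != 5) {
            throw new AssertionError("expected 2 + 3 to equal 5");
        }
    }

    public static void main(String[] args) {
        shouldSumTwoPositiveNumbers();
        System.out.println("all tests passed");
    }
}

// Step 2 (green): just enough production code to make the test pass.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}
```

A real project would use a test framework such as JUnit rather than a hand-rolled main method, but the rhythm is the same: write the failing test, make it pass, refactor, repeat.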

Changing code

One of the biggest problems with legacy codebases that lack automated tests is that change incurs a high level of risk. There is no way to know how a given code change might unexpectedly change the behavior of the system. Worse yet, there is no way to know when you have adequately validated a change. Having an automated test suite mitigates this risk by providing a safety net against unexpected change. This is especially important when refactoring, as it verifies that the behavior remains the same while the implementation changes.
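A small sketch of this safety net, using an invented Stats class: the test pins down the behavior, so the implementation underneath it can be rewritten freely. The same test passes before and after the refactoring.

```java
// A behavioral test that acts as the safety net during refactoring.
class StatsTest {
    static void shouldSumSquaresOfAllValues() {
        int result = Stats.sumOfSquares(new int[] {1, 2, 3});
        if (result != 14) {
            throw new AssertionError("expected 14, got " + result);
        }
    }

    public static void main(String[] args) {
        shouldSumSquaresOfAllValues();
        System.out.println("behavior unchanged");
    }
}

class Stats {
    // Original implementation: an explicit loop.
    // static int sumOfSquares(int[] values) {
    //     int total = 0;
    //     for (int v : values) total += v * v;
    //     return total;
    // }

    // Refactored implementation: a stream pipeline. The test above
    // passes for both versions, which is what makes the change safe.
    static int sumOfSquares(int[] values) {
        return java.util.Arrays.stream(values).map(v -> v * v).sum();
    }
}
```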

Understanding code

The third and least obvious reason for testing is documentation of intent. The tests serve as a living document to other developers, letting them know what behavior you expect the code to have. Using tests as documentation is better than a design document or code comment for two reasons. First, it is much less tedious to write, because code is more expressive to developers than prose. Second, since it is compilable and executable, it must be kept up to date or the test suite fails. Traditional documentation, when written at all, is rarely maintained and often out of date. It also usually documents what the code does but not why it does it. Tests as documentation do both.

One practice critical to tests serving as documentation is naming. Tests should be named for exactly what they do, not how they do it. For example, shouldReturnCorrectSkyColorFromGetSkyColor() is a bad test name for a few reasons. First, it says what the code does, not what behavior it should exhibit. Avoid words like “return” or “call” or other programming jargon. Second, it is too generic. What is the “correct sky color”? How does the reader know it is correct? Finally, it does not describe the behavioral precondition at all. Tests should be about setting up a situation, invoking part of the system, and verifying a result.

A much better test name would be shouldIndicateSkyIsBlueWhenNoCloudsPresent. Given this test, I would also expect to see shouldIndicateSkyIsGreenWhenTornadoImminent and shouldIndicateSkyIsGrayWhenCumulusCloudsPresent. These tests clearly specify their preconditions. They are also specific about the expected result for each precondition. They do not mention code jargon or method names. The color might be indicated by a return value, by setting a member variable, or by calling a callback method; the content of the test will tell you how it is done.
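One way the well-named sky tests might look in practice is sketched below. The Sky and Weather types are invented here purely for illustration; note how each test body follows the setup-invoke-verify shape the names promise.

```java
// Hypothetical tests whose names describe behavior, not implementation.
class SkyTest {
    static void shouldIndicateSkyIsBlueWhenNoCloudsPresent() {
        Sky sky = new Sky(Weather.CLEAR);       // set up the situation
        String color = sky.observedColor();     // invoke the system
        if (!color.equals("blue")) {            // verify the result
            throw new AssertionError("expected blue, got " + color);
        }
    }

    static void shouldIndicateSkyIsGreenWhenTornadoImminent() {
        Sky sky = new Sky(Weather.TORNADO);
        if (!sky.observedColor().equals("green")) {
            throw new AssertionError("expected green");
        }
    }

    public static void main(String[] args) {
        shouldIndicateSkyIsBlueWhenNoCloudsPresent();
        shouldIndicateSkyIsGreenWhenTornadoImminent();
        System.out.println("all tests passed");
    }
}

enum Weather { CLEAR, TORNADO, CUMULUS }

class Sky {
    private final Weather weather;

    Sky(Weather weather) { this.weather = weather; }

    String observedColor() {
        switch (weather) {
            case TORNADO: return "green";
            case CUMULUS: return "gray";
            default:      return "blue";
        }
    }
}
```

The test names alone read as a specification of Sky's behavior; whether the color comes from a return value or some other mechanism is a detail left to each test body.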

Conclusion

When writing tests, keep in mind that they are more than just making sure you get the logic right. Tests are your safety net for future change and documentation for you and other developers on your team. Take the time to test correctly and you will save debug and bugfix time in your current iteration and those to follow.

Published: January 19 2013
