Why do we write tests? 

We’ve all heard excuses for not writing tests - you don’t know how, it’s too complex to test, there’s no time, it’s not your job, the code is too simple to need testing…

Or maybe you love writing tests. Maybe you appreciate the many benefits that well-tested code provides. This is not a post to convince you to write tests. I’m going to assume you are already convinced of the many benefits of testing in principle:

  • Tests help document the application functionality: many developers prefer clear, runnable code examples over pages of written explanation.
  • Tests help improve the design of an application: a testable application will more often adhere to proven design patterns and sound approaches to organising and architecting the code base.
  • Tests help reduce the bugs in new features: tests don’t remove existing bugs, but they can dramatically decrease the number introduced as you produce new features.
  • Tests allow for refactoring: otherwise it’s just redevelopment.
  • Tests help reduce the number of issues that are accidentally introduced into old features: the development team and client can have a much higher level of confidence in moving fast and releasing often.
  • Tests can even reduce the associated cost of making changes to a code base: it’s faster to identify where changes will have an effect and what impact they will have, and there’s less likelihood of an extended, expensive back-and-forth between development team and client to confirm that the change was “successful”.

Assuming, then, that we’re all convinced the benefits of testing outweigh the negatives, the next question is:

Why do we write the tests first?

The simplest answer is that you rarely know *exactly* how the implementation code will be written ahead of time (and who really ever does?). By writing the test first you help to design and plan your implementation, defining the expected behaviours of the system before you build it.

You’re producing definitions of success that your application must meet; definitions that are based on a shared understanding between both the application owner and the development team. This helps drive down the number of defects in any resulting code.
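As a sketch of what one of those definitions of success might look like in practice - written here in Python with pytest, with the business rule and every name invented purely for illustration:

    # A behaviour agreed in plain language with the application owner,
    # captured as an executable definition of success before any
    # implementation exists. All names here are hypothetical.
    def test_account_locks_after_three_failed_sign_in_attempts():
        account = Account("tom@example.com")
        for _ in range(3):
            account.sign_in(password="wrong")
        assert account.is_locked

Run before any of the account code exists, that test fails - and that failure, as we’ll see below, is exactly the point.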

Defects start a negative feedback loop. The more that exist, the more you end up introducing! The cost of making a change to the system becomes vague and difficult to forecast. 

In fact, as defects accumulate in a system, the cost of making changes increases exponentially. Breaking a negative feedback loop is hard.

Ultimately our goal is to deliver new features quickly and predictably, at a very high standard of quality. In order to do so, we need to be able to reasonably predict the cost of making changes to a system, plan for changes with confidence, and be productive. Taking a test-driven approach is important because it allows us to break the negative feedback loop introduced by defects in any system, and maintain that predictable and consistent cost of making changes.

What does test-driven development look like to us?

  1. Write a test that defines an expected behaviour of the system
  2. Write only enough code to let the test run
  3. Run the test (watch it fail)
  4. Modify the code just enough to have the test pass
  5. Run the test (watch it pass)
  6. Refactor, and repeat 3 - 6 until satisfied with the quality of the feature
  7. Repeat 1 - 6 for each expected behaviour of the system (one pass through this loop is sketched below)
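Concretely, here’s a minimal sketch of a single pass - again in Python with pytest, with a hypothetical discount rule and invented names throughout:

    import pytest

    # Step 1: a test defining an expected behaviour of the system - here,
    # a hypothetical rule that orders of 100.00 or more get a 10% discount.
    def test_orders_of_100_or_more_get_a_10_percent_discount():
        assert discounted_total(120.00) == pytest.approx(108.00)

    # Step 2 was only enough code for the test to run - a body of just
    # `raise NotImplementedError` - and step 3 was running it and watching
    # it fail. Step 4 then modifies the code just enough to make it pass:
    def discounted_total(total):
        if total >= 100.00:
            return total * 0.90
        return total

    # Step 5: run it again and watch it pass. Step 6: refactor freely,
    # re-running the test after every change as a safety net.

The rhythm matters more than the language or framework; the same loop applies whatever stack you’re working in.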

Ok, we’re writing tests first, and we’re reaping the benefits. Everything is perfect because everyone is following steps 1 to 7 every time. Right?

Who skips step 3? Why?

If it’s because the tests take too long to run, so you want to run them less often, then that’s a genuine issue that needs solving before you continue feature development. Tests should be lightweight and run as often as needed.

If it’s because you just know it’s going to fail, so there’s absolutely no point in running it anyway, stop. Are you even sure it can fail? It’s perfectly possible to accidentally write tests that can never fail, under any circumstances. 
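For example, here’s a sketch - invented names, deliberately broken data - of two common ways a Python test can quietly become impossible to fail:

    from types import SimpleNamespace

    def get_user():
        # Deliberately broken data: this user is NOT active.
        return SimpleNamespace(active=False)

    def fetch_orders():
        # Deliberately broken data: the query returns no rows at all.
        return []

    def test_user_is_active():
        user = get_user()
        # Bug: the trailing message turns the assertion into a two-element
        # tuple, and a non-empty tuple is always truthy, so this can never
        # fail (modern pytest will at least warn that it is always true).
        assert (user.active, "expected the user to be active")

    def test_all_orders_are_valid():
        # Bug: with no rows returned, the loop body never executes, no
        # assertion is ever made, and the test passes vacuously.
        for order in fetch_orders():
            assert order.is_valid()

Both tests pass against obviously broken data; only watching a test fail at least once proves that it can.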

Run the test first. If it’s slow, solve the performance issue; if it passes, use that information to improve the overall application. Maybe it was a badly written test that we’ve now caught early and fixed, before it could give false confidence in the application code - or perhaps the test proves that our system already contains this feature, and we were about to waste time producing it again!

Watching it fail first provides proof that the test has value as a unique, meaningful check on the system, and that the feature we’re about to produce to make it pass is useful and actually needed.

What’s more, if we run the entire suite of tests each time, we would expect only the one new test to fail. If more fail, we’ve once again received important, time-saving information early in the process, before that time could be wasted.

So back to the original question: 

Why is it important to fail first?

Because succeeding soon after gives you confidence in your system and, more importantly, in the safety net of tests around that system. With that confidence in place we can forecast changes with less fear, we can concentrate our efforts on improving the quality and feature set of the system, we can continuously deploy and deliver with certainty and, best of all, we can stop firefighting.

To find out more about how we work at Box UK visit the Our Development Process section of our site, and if there's a specific development project you'd like to discuss, get in touch with a member of our team today.

About the author

Tom Houdmont

Tom is a Solution Architect at Box UK. He likes learning about exciting new technologies, solving difficult problems, and promoting teamwork and transparency on his projects.
