Teams that adopt agile practices often adopt Test Driven Development
(TDD), which means, of course, that they end up writing a lot of tests.
In general, that's great, but there is a failure mode for teams trying
to become test-infected: you can end up writing tests so slow that they
start to feel like baggage, even though they help you catch errors.
This issue with unit tests isn't new; it's been around for a while, and
there are a couple of ways of handling it. In Extreme Programming, the
typical approach is to periodically go back and optimize your tests as
they get too slow. In many cases this works well, but the amount of
optimization you have to do can be rather large if you haven't been
conscious of how long your tests run during development.
development. In one case that stands out in my memory, I visited a team
on the east coast about four years ago that wrote oodles of tests
against their EJB environment. The tests hit a server and went through
session beans, entity beans, down to the bowels of the database and
then up again. Their refrain? "We don't like writing unit tests any
more; they take too long to run." I didn't blame them for feeling that
way, but I also didn't agree that they had written any unit tests.
The problem is rather common. I've spoken to other XPers about it over
the years, and I figured that the way I handled it was common too, but
I was surprised to discover (on the XP Yahoo group this week) that it
was also a bit contentious. Here's what I typically say when I run into
teams that have this problem.
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing
config files) to run it.
Tests that do these things aren't bad. Often they are worth
writing, and they can be written in a unit test harness. However, it is
important to be able to separate them from true unit tests so that we
can keep a set of tests that we can run fast whenever we make our
changes.
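One way to act on that separation is to tag the slow, environment-dependent tests so that the fast suite can run on every change while the rest run less often. A minimal sketch using Python's unittest (the tagging scheme and class names here are illustrative assumptions, not a prescribed mechanism):

```python
# Hypothetical sketch: tag each test case as "fast" (a true unit test) or
# "slow" (touches a database, network, or file system), then build the
# suite you want from the tags. Names are illustrative.
import unittest

FAST = "fast"
SLOW = "slow"

def suite_of(kind, *cases):
    """Collect the test cases whose tag matches the requested kind."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for case in cases:
        if getattr(case, "kind", FAST) == kind:
            suite.addTests(loader.loadTestsFromTestCase(case))
    return suite

class DiscountMathTest(unittest.TestCase):
    kind = FAST  # pure logic: no database, network, or file system

    def test_ten_percent_off(self):
        self.assertEqual(90, 100 - 100 * 10 // 100)

class DatabaseRoundTripTest(unittest.TestCase):
    kind = SLOW  # worth writing, but not worth running on every save

    def test_round_trip(self):
        self.skipTest("needs a live database")

# The fast suite is what you run whenever you make a change.
fast_suite = suite_of(FAST, DiscountMathTest, DatabaseRoundTripTest)
slow_suite = suite_of(SLOW, DiscountMathTest, DatabaseRoundTripTest)
```

The point is not the mechanism (markers, naming conventions, and separate source trees all work) but that the classification is explicit, so the fast suite stays fast.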
That might sound a little severe, but it is medicine for a common
problem. Generally, unit tests are supposed to be small; they test a
method or the interaction of a couple of methods. When you pull
database, socket, or file system access into your unit tests, they
aren't really about those methods any more; they are about the
integration of your code with that other software. If you write code in
a way that separates your logic from OS and vendor services, you not
only get faster unit tests, you get a binary chop that lets you
discover whether a problem is in your logic or in the things you are
interfacing with. If all the unit tests pass but the other tests (the
ones not using mocks) don't, you are far closer to isolating the
problem.
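That separation can be as simple as passing the vendor service in as a collaborator. A sketch, assuming a hypothetical payment gateway (the names and the billing logic are made up for illustration): the unit test exercises the logic through an in-memory fake, while a separate integration test can use the real service.

```python
# Hypothetical sketch: billing logic depends on a "gateway" object rather
# than a real network service, so the unit test stays fast and any failure
# points at the logic, not the integration.
class FakeGateway:
    """In-memory stand-in for a remote payment service."""
    def __init__(self):
        self.charged = []

    def charge(self, account, amount):
        # Record the call instead of crossing the network.
        self.charged.append((account, amount))
        return True

def bill_customer(gateway, account, amount, discount_pct):
    """The logic under test: apply the discount, then delegate the charge."""
    due = amount - amount * discount_pct // 100
    return gateway.charge(account, due), due

# A "pure" unit test: no database, no network, no file system.
gateway = FakeGateway()
ok, due = bill_customer(gateway, "acct-1", 100, 10)
```

If this test fails, the bug is in `bill_customer`; if it passes but the test against the real gateway fails, the bug is in the integration. That is the binary chop.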
Frankly, we need both kinds of tests, but these "pure" unit tests are undervalued.