Let's say we agree that, in an ideal world, we should automate tests for our software. In reality, however, writing automated tests takes effort. So the question arises: at what point is a test worth automating?
If we assume that the cost of executing an automated test is negligible compared with the cost of executing it manually, we could say that a test is worth automating when the cost of automation is less than the projected total cost of manual execution. This is illustrated by the following graph, where the blue and red lines represent, respectively, the cost of automatically and manually testing a system in relation to the number of test cycles that will be executed:
[Figure: cost of automated (blue) versus manual (red) testing plotted against the number of test cycles — http://www.javaworld.com/javaworld/jw-03-2005/images/jw-0307-testing.jpg]
The point at which the two lines intersect is the point at which automated testing becomes worth the trouble. However, we can't tell whether our particular system has reached that point unless we are able to answer the following questions:
- How many times will we manually execute our test cycle?
- How long will it take to write automated tests?
The problem is that we can't know the answers to those questions in advance. We can, however, use the past as a basis for an estimate.
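To make the break-even point concrete, here is a minimal sketch of the calculation in Java. The class name, parameters, and figures are hypothetical and not taken from the article; the sketch simply takes an estimated effort to write the automated test and an effort per manual execution, and computes how many test cycles it takes before automation pays for itself.

    // A minimal sketch of the break-even calculation described above.
    // All figures and names here are hypothetical, not taken from the article.
    public class BreakEvenCalculator {

        /**
         * Number of test cycles after which automating a test becomes cheaper
         * than executing it manually, assuming the cost of running the
         * automated test is negligible.
         *
         * @param automationCost      estimated effort to write the automated test (hours)
         * @param manualCostPerCycle  effort to execute the test manually once (hours)
         */
        static int breakEvenCycles(double automationCost, double manualCostPerCycle) {
            // Automation pays off once n * manualCostPerCycle >= automationCost.
            return (int) Math.ceil(automationCost / manualCostPerCycle);
        }

        public static void main(String[] args) {
            // Hypothetical example: 8 hours to automate, half an hour per manual run.
            int cycles = breakEvenCycles(8.0, 0.5);
            System.out.println("Automation pays for itself after " + cycles + " test cycles.");
            // Prints: Automation pays for itself after 16 test cycles.
        }
    }

If the projected number of manual executions exceeds that figure, then by this simple model the test is worth automating.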
Let's reconsider the work I described earlier. We know how many times I manually executed that particular test cycle—now how
long would it take to automate it?