Even if nobody changes your software, the environment it lives in can still change. Most software doesn't live in isolation, so it cannot dictate the pace of change.
Virtual
machines are upgraded. Database drivers are upgraded. Databases are
upgraded. Application servers are upgraded. Operating systems are
upgraded. These changes are inevitable; in fact, some argue that, as a best practice, administrators should proactively keep their databases, operating systems, and application servers up-to-date with the latest patches and fixes.
Then there are the changes within your organization's proprietary software. For example, an enterprise datasource developed by another division in your organization is upgraded, and you are entirely dependent upon it. Alternatively, suppose your software is deployed to an application server that also hosts another in-house application. Suddenly, that other application requires the application server to be upgraded to the latest version, and yours goes along for the ride whether it wants to or not.
Change is constant and inevitable, and it entails risk. To mitigate the risk, you test; but as we've seen, manual testing quickly becomes impractical. I believe that more automated testing is the way around this problem.
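To make this concrete, here is a minimal sketch of the kind of automated test I have in mind: one that checks the environment rather than the business logic. It is written with JUnit and JDBC; the connection details and the version range are hypothetical placeholders, not a prescription.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // A sketch of an environment smoke test: instead of exercising business
    // logic, it verifies that the database the application depends on is
    // still reachable and still at a version we have actually tested against.
    public class EnvironmentSmokeTest {

        // Hypothetical connection details; substitute your own.
        private static final String JDBC_URL =
                "jdbc:postgresql://localhost:5432/appdb";
        private static final String USER = "app";
        private static final String PASSWORD = "secret";

        @Test
        public void databaseIsReachableAndAtATestedVersion() throws Exception {
            try (Connection conn =
                    DriverManager.getConnection(JDBC_URL, USER, PASSWORD)) {
                DatabaseMetaData meta = conn.getMetaData();
                int major = meta.getDatabaseMajorVersion();
                // Fail fast if an upgrade has moved the database past the
                // versions this application has been tested with.
                assertTrue("Untested database major version: " + major,
                        major >= 12 && major <= 16);
            }
        }
    }

Run on every build, a handful of checks like this will announce an environmental change the moment it happens, instead of leaving it to be discovered in production.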
Some argue that management should be responsible for coordinating changes: they should track dependencies, ensure that if one of your dependencies changes you retest, and synchronize cross-system changes with releases. However, in my experience, these dependencies are complex and rarely tracked successfully. I propose an alternative approach: make software systems better able to test themselves, and so to cope with inevitable change.

As I see it, organizations that fail to cope with this change lean in one of two directions: those that reduce their testing to maintain the pace, and those that reduce the pace to maintain their testing. Each approach has its problems.
Organizations that reduce their testing to maintain the pace tend to
say: "Manual testing takes too long, and automated testing is too hard,
so we just won't test as much." Consequently, they suffer from all of the problems that reduced testing brings. However, as I mentioned in my introduction, this article doesn't set out to argue the case for testing, so I won't discuss the subject further.
Organizations that reduce the pace to maintain their testing tend to say: "Testing is important, but writing automated tests is too hard, so we'll test manually." This is better than no testing, but I do not believe that manual testing can cope with the necessary pace of change on large systems in an enterprise environment. A reduction in pace is a barrier to the system's advancement: the software's architecture slowly but steadily degrades. For example, application servers are not upgraded, and new projects are forced to use old platforms because it is not practical to manually retest everything already deployed on those platforms.