Introduction:
The case for automating the Software Testing Process has been made repeatedly and
convincingly by numerous testing professionals. Most people involved in the testing of
software will agree that the automation of the testing process is not only desirable, but
in fact is a necessity given the demands of the current market.
A number of Automated Test Tools have been developed for GUI-based applications as well
as Mainframe applications, and several of these are quite good inasmuch as they provide
the user with the basic tools required to automate their testing process. Increasingly,
however, we have seen companies purchase these tools, only to realize that implementing a
cost-effective automated testing solution is far more difficult than it appears. We often
hear something like "It looked so easy when the tool vendor (salesperson) did it, but
my people couldn’t get it to work.", or, "We spent 6 months trying to
implement this tool effectively, but we still have to do most of our testing
manually.", or, "It takes too long to get everything working properly. It takes
less time just to manually test.". The end result, all too often, is that the tool
ends up on the shelf as just another "purchasing mistake".
The purpose of this document is to provide the reader with a clear understanding of
what is actually required to successfully implement cost-effective automated testing.
Rather than engage in a theoretical dissertation on this subject, I have endeavored to be
as straightforward and brutally honest as possible in discussing the issues, problems,
necessities, and requirements involved in this enterprise.
What is "Automated Testing"?
Simply put, what is meant by "Automated Testing" is automating the manual
testing process currently in use. This requires that a formalized "manual testing
process" currently exists in your company or organization. Minimally, such a process
includes:
- Detailed test cases, including predictable "expected results", which have been
developed from Business Functional Specifications and Design documentation
- A standalone Test Environment, including a Test Database that is restorable to a known
constant, such that the test cases are able to be repeated each time there are
modifications made to the application
If your current testing process does not include the above points, you are never going
to be able to make any effective use of an automated test tool.
So if your "testing methodology" just involves turning the software release
over to a "testing group" comprised of "users" or "subject matter
experts" who bang on their keyboards in some ad hoc fashion or another, then you
should not concern yourself with testing automation. There is no real point in trying to
automate something that does not exist. You must first establish an
effective testing process.
The real use and purpose of automated test tools is to automate regression testing.
This means that you must have, or must develop, a database of detailed, repeatable
test cases, and that this suite of tests be run every time there is a change to
the application, to ensure that the change does not produce unintended consequences.
An "automated test script" is a program. Automated script development,
to be effective, must be subject to the same rules and standards that are applied to
software development. Making effective use of any automated test tool requires at least
one trained, technical person – in other words, a programmer.
Cost-Effective Automated Testing
Automated testing is expensive (contrary to what test tool vendors would have
you believe). It does not replace the need for manual testing or enable you to
"down-size" your testing department. Automated testing is an addition to
your testing process. According to Cem Kaner, in his paper entitled "Improving the
Maintainability of Automated Test Suites", it can take 3 to 10 times as long (or
longer) to develop, verify, and document an automated test case as it does to
create and execute a manual test case. This is especially true
if you elect to use the "record/playback" feature (contained in most test tools)
as your primary automated testing methodology. Record/Playback is the least
cost-effective method of automating test cases.
Automated testing can be made to be cost-effective, however, if some common sense is
applied to the process:
- Choose a test tool that best fits the testing requirements of your organization or
company. An "Automated Testing Handbook" is available from the Software Testing
Institute, which covers all of the major considerations involved in choosing the
right test tool for your purposes.
- Realize that it doesn’t make sense to automate some tests. Overly complex tests are
often more trouble than they are worth to automate. Concentrate on automating the majority
of your tests, which are probably fairly straightforward. Leave the overly complex tests
for manual testing.
- Only automate tests that are going to be repeated. One-time tests are not worth
automating.
- Avoid using "Record/Playback" as a method of automating testing. This method
is fraught with problems, and is the most costly (time consuming) of all methods over the
long term. The record/playback feature of the test tool is useful for determining how the
tool is trying to process or interact with the application under test, and can give you
some ideas about how to develop your test scripts, but beyond that, its usefulness ends
quickly.
- Adopt a data-driven automated testing methodology. This allows you to develop
automated test scripts that are more "generic", requiring only that the input
and expected results be updated (a brief sketch follows this list). There are two
useful data-driven methodologies, and I will discuss both of them in detail in this paper.
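To make this concrete, here is a minimal sketch of a data-driven test. It is written in Python purely for illustration (in practice the script would be written in your test tool's scripting language), and the file name, record layout, and application-driving functions shown are all hypothetical:

    import csv

    def post_payment(account, amount):
        # Hypothetical stand-in for the test tool calls that drive the
        # application: access the Payment screen, enter the data, post it.
        ...

    def read_current_balance(account):
        # Hypothetical stand-in for capturing the actual displayed result.
        ...

    def run_payment_tests(data_file="payments.csv"):
        # One generic script; each record supplies one test's input and
        # expected results (columns: account, amount, expected_balance).
        failures = []
        with open(data_file, newline="") as f:
            for row in csv.DictReader(f):
                post_payment(row["account"], row["amount"])
                actual = read_current_balance(row["account"])
                if str(actual) != row["expected_balance"]:
                    failures.append((row["account"], row["expected_balance"], actual))
        return failures

Adding a new test then means adding a record to the data-file, not writing a new script.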
The Record/Playback Myth
Every automated test tool vendor will tell you that their tool is "easy to
use" and that your non-technical user-type testers can easily automate all of their
tests by simply recording their actions, and then playing back the recorded scripts. This
one claim, more than any other, is responsible for the majority of automated test
tool software now gathering dust on shelves in companies around the world. I would
just love to see one of these salespeople try it themselves in a real-world scenario.
Here’s why it doesn’t work:
- The scripts resulting from this method contain hard-coded values which must
change if anything at all changes in the application (illustrated in the sketch below).
- The costs associated with maintaining such scripts are astronomical, and unacceptable.
- These scripts are not reliable, even if the application has not changed, and often fail
on replay (pop-up windows, messages, and other things can happen that did not happen when
the test was recorded).
- If the tester makes an error entering data, etc., the test must be re-recorded.
- If the application changes, the test must be re-recorded.
- All that is being tested are things that already work. Errors are encountered
during the recording process (which is manual testing, after all); these bugs are
reported, but a script cannot be recorded until the software is corrected. So what,
exactly, are you testing?
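To see the maintenance problem concretely, this is roughly what a recorded script reduces to (rendered here as Python-style pseudocode; the window, field, and data values are hypothetical stand-ins for whatever the tool actually generates):

    def select_window(title): ...      # stand-ins for the calls a record/playback
    def type_into(field, text): ...    # tool generates; every argument below was
    def press(button): ...             # captured, and frozen, at recording time
    def check_text(label, expected): ...

    def recorded_payment_test():
        select_window("Payment Entry")           # breaks if the window title changes
        type_into("Account", "10045678")         # works for this one account only
        type_into("Amount", "125.00")            # a new amount means re-recording
        press("Post")
        check_text("Current Balance", "874.50")  # breaks as soon as the data changes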
After about 2 to 3 months of this nonsense, the tool gets put on the shelf or buried in
a desk drawer, and the testers get back to manual testing. The tool vendor couldn’t
care less – they are in the business of selling test tools, not testing software.
Viable Automated Testing Methodologies
Now that we’ve eliminated Record/Playback as a reasonable long-term automated
testing strategy, let’s discuss some methodologies that I (as well as others) have
found to be effective for automating functional or system testing for most business
applications.
The "Functional Decomposition"
Method
The main concept behind the "Functional
Decomposition" script development methodology is to reduce all test cases to their
most fundamental tasks, and write User-Defined Functions, Business Function Scripts,
and "Sub-routine" or "Utility" Scripts which perform
these tasks independently of one another. In general, these fundamental areas
include:
- Navigation (e.g. "Access Payment Screen from Main Menu")
- Specific (Business) Function (e.g. "Post a Payment")
- Data Verification (e.g. "Verify Payment Updates Current Balance")
- Return Navigation (e.g. "Return to Main Menu")
In order to accomplish this, it is necessary to separate Data from Function.
This allows an automated test script to be written for a Business Function, using data-files
to provide both the input and the expected-results verification. A hierarchical
architecture is employed, using a structured or modular design.
The highest level is the Driver script, which is the engine of the test. The Driver
Script contains a series of calls to one or more "Test Case" scripts. The
"Test Case" scripts contain the test case logic, calling the Business Function
scripts necessary to perform the application testing. Utility scripts and functions are
called as needed by Driver, Test Case, and Business Function scripts.
- Driver Scripts:
Perform initialization (if required), then call the Test Case Scripts in the
desired order.
- Test Case Scripts:
Perform the application test case logic using Business Function Scripts.
- Business Function Scripts:
Perform specific Business Functions within the application.
- Subroutine Scripts:
Perform application-specific tasks required by two or more Business Function scripts.
- User-Defined Functions:
General, Application-Specific, and Screen-Access Functions.
Note that Functions can be called from any of the above script types.
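In skeleton form, the hierarchy looks something like this (Python used purely for illustration; every name is hypothetical, and the real scripts would be written in the test tool's scripting language):

    def goto_screen(name): ...             # User-Defined (Screen-Access) Function
    def verify_screen(expected_file): ...  # Utility function: compare the screen
                                           # against an expected-results data-file

    def payment(input_file, expected_file):    # Business Function script
        goto_screen("Payment")
        # ...enter data from input_file, post it, verify against expected_file...
        goto_screen("Main Menu")

    def post_payment_test_case(case_dir):      # Test Case script: the test case logic
        payment(case_dir + "/input.dat", case_dir + "/payment_expected.dat")
        verify_screen(case_dir + "/summary_expected.dat")

    def driver(case_dirs):                     # Driver script: the engine of the test
        # (perform any initialization here, if required)
        for case_dir in case_dirs:             # call Test Cases in the desired order
            post_payment_test_case(case_dir)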
Example:
The following steps could constitute a "Post a Payment" Test Case:
- Access Payment Screen from Main Menu
- Post a Payment
- Verify Payment Updates Current Balance
- Return to Main Menu
- Access Account Summary Screen from Main Menu
- Verify Account Summary Updates
- Access Transaction History Screen from Account Summary
- Verify Transaction History Updates
- Return to Main Menu
A "Business Function" script and a "Subroutine" script could be
written as follows:
Payment:
- Start at Main Menu
- Invoke a "Screen Navigation Function" to access the Payment Screen
- Read a data file containing specific data to enter for this test, and input the data
- Press the button or function-key required to Post the payment
- Read a data file containing specific expected results data
- Compare this data to the data which is currently displayed (actual results)
- Write any discrepancies to an Error Report
- Press button or key required to return to Main Menu or, if required, invoke a
"Screen Navigation Function" to do this.
Ver-Acct (Verify Account Summary & Transaction History):
- Start at Main Menu
- Invoke a "Screen Navigation Function" to access the Account Summary
- Read a data file containing specific expected results data
- Compare this data to the data which is currently displayed (actual results)
- Write any discrepancies to an Error Report
- Press button or key required to access Transaction History
- Read a data file containing specific expected results data
- Compare this data to the data which is currently displayed (actual results)
- Write any discrepancies to an Error Report
- Press button or key to return to Main Menu or, if required, invoke a "Screen
Navigation Function".
The "Business Function" and "Subroutine" scripts invoke "User
Defined Functions" to perform navigation. The "Test Case" script would call
these two scripts, and the Driver Script would call this "Test Case" script some
number of times required to perform all the required Test Cases of this kind. In each
case, the only thing that changes are the data contained in the files that are read
and processed by the "Business Function" and "Subroutine" scripts.
Using this method, if we needed to process 50 different kinds of payments in order to
verify all of the possible conditions, then we would need only 4 scripts which are re-usable
for all 50 cases:
- The "Driver" script
- The "Test Case" (Post a Payment & Verify Results) script
- The "Payment" Business Function script
- The "Verify Account Summary & Transaction History" Subroutine script
If we were using Record/Playback, we would now have 50 scripts, each containing
hard-coded data, that would have to be maintained.
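In the data-driven version, the fifty payment cases collapse into one loop over fifty sets of data-files (continuing the illustrative Python and reusing post_payment_test_case from the earlier skeleton; the directory layout is hypothetical):

    import glob

    def driver():
        # One sub-directory of data-files per payment condition to verify;
        # the same four scripts service every one of the fifty cases.
        for case_dir in sorted(glob.glob("testcases/payments/*")):
            post_payment_test_case(case_dir)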
This method, however, requires only that we add the data-files required for each test,
and these can easily be updated/maintained using Notepad or some such text-editor. Note
that updating these files does not require any knowledge of the automated tool, scripting,
programming, etc., meaning that the non-technical testers can perform this function, while
one "technical" tester can create and maintain the automated scripts.
It should be noted that the "Subroutine" script, which verifies the Account
Summary and Transaction History, can also be used by other test cases and business
functions (which is why it is classified as a "Subroutine" script rather than a
"Business Function" script) – Payment reversals, for example. This means
that if we also need to perform 50 "payment reversals", we only need to develop three
additional scripts:
- The "Driver" script
- The "Test Case" (Reverse a Payment & Verify Results) script
- The "Payment Reversal" Business Function script
Since we already had the original 4 scripts, we can quickly clone these three new
scripts from the originals (which takes hardly any time at all). We can use the
"Subroutine" script as-is without any modifications at all.
If different accounts need to be used, then all we
have to do is update the Data-Files, and not the actual scripts. It ought
to be obvious that this is a much more cost-effective method than the Record/Playback
method.
Advantages:
- Utilizing a modular design, and using files or records to both input and verify data,
reduces redundancy and duplication of effort in creating automated test scripts.
- Scripts may be developed while application development is still in progress. If
functionality changes, only the specific "Business Function" script needs to be
updated.
- Since scripts are written to perform and test individual Business Functions, they can
easily be combined in a "higher level" test script in order to accommodate
complex test scenarios.
- Data input/output and expected results are stored as easily maintainable text records.
The user’s expected results are used for verification, which is a requirement
for System Testing.
- Functions return "TRUE" or "FALSE" values to the calling script,
rather than aborting, allowing for more effective error handling, and increasing the
robustness of the test scripts. This, along with a well-designed "recovery"
routine, enables "unattended" execution of test scripts.
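A minimal sketch of this return-value-plus-recovery pattern (illustrative Python; log_error and recover_to_main_menu are hypothetical):

    def log_error(message): ...   # stand-in for writing to the Error Report

    def recover_to_main_menu():
        # Hypothetical "recovery" routine: force the application back to a
        # known state (the Main Menu) so the next test case can proceed.
        ...

    def run_test_case(test_case, *args):
        # Run one Test Case; return True/False to the Driver instead of aborting.
        try:
            test_case(*args)
            return True
        except Exception as err:
            log_error(f"{test_case.__name__} failed: {err}")
            recover_to_main_menu()
            return False

The Driver simply checks the returned value and moves on to the next Test Case, which is what makes overnight, unattended runs possible.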
Disadvantages:
- Requires proficiency in the Scripting language used by the tool (technical personnel).
- Multiple data-files are required for each Test Case. There may be any number of
data-inputs and verifications required, depending on how many different screens are
accessed. This usually requires data-files to be kept in separate directories by
Test Case.
- Tester must not only maintain the Detail Test Plan with specific data, but must also
re-enter this data in the various required data-files.
- If a simple "text editor" such as Notepad is used to create and
maintain the data-files, careful attention must be paid to the format required by the
scripts/functions that process the files, or script-processing errors will occur due to
data-file format and/or content being incorrect.
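One inexpensive defense is to have the file-reading function validate each record before any script acts on it (illustrative Python, again assuming the hypothetical "field=value" layout used above):

    def read_records_checked(path):
        # Fail fast, with a useful message, on a malformed data-file record,
        # rather than letting a script die mid-test on bad input.
        with open(path) as f:
            for lineno, line in enumerate(f, start=1):
                line = line.strip()
                if not line:
                    continue   # ignore blank lines
                if "=" not in line:
                    raise ValueError(f"{path}:{lineno}: expected 'field=value', got {line!r}")
                field, _, value = line.partition("=")
                yield field.strip(), value.strip()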