Topic: Introduction to QA Testing Tools

Posted by Mithi25 on 30Oct2009 at 11:42pm

Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating an existing manual process that already follows a formalized testing methodology. Testers are being asked to test more and more code in less and less time, and test automation is one way to cope, since manual testing is time-consuming. As new versions of the software are released, existing features would otherwise have to be retested manually time and again. There are now tools available that help testers automate GUI testing, reducing both test time and cost, while other test automation tools support the execution of performance tests.

Many test automation tools provide record-and-playback features that let users interactively record user actions and replay them any number of times, comparing actual results to those expected. However, relying on these features alone poses major reliability and maintainability problems. Most successful automators use a software engineering approach, and as a result most serious test automation is undertaken by people with development experience.

A growing trend in software development is to use unit testing frameworks such as the xUnit frameworks (for example, JUnit and NUnit), which allow unit tests to be run to determine whether various sections of the code behave as expected under various circumstances. Test cases describe the tests that need to be run against the program to verify that it behaves as expected. Each of these aspects of testing can be automated.
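
As a minimal illustration of a unit test in one of these frameworks, the JUnit-style sketch below uses a hypothetical Calculator class (not something described in this post) purely to show the shape of an automated check:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CalculatorTest {

        // Minimal class under test, included here so the sketch is self-contained.
        static class Calculator {
            int add(int a, int b) { return a + b; }
        }

        @Test
        public void addReturnsSumOfTwoNumbers() {
            // Compare the actual outcome to the predicted outcome.
            assertEquals(5, new Calculator().add(2, 3));
        }

        @Test
        public void addHandlesNegativeNumbers() {
            assertEquals(-1, new Calculator().add(2, -3));
        }
    }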

Another important aspect of test automation is the idea of partial test automation, or automating parts but not all of the software testing process. If, for example, an oracle cannot reasonably be created, or if fully automated tests would be too difficult to maintain, then a software tools engineer can instead create testing tools to help human testers perform their jobs more efficiently. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped with oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion.
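
For instance, one common piece of partial automation is a small test data generator. The sketch below is a hypothetical Java helper (the file name, columns, and record count are assumptions for illustration) that writes a CSV of user records for manual testers to load into the application under test:

    import java.io.FileWriter;
    import java.io.IOException;

    // Hypothetical helper: generates simple user records as CSV test data.
    public class TestDataGenerator {

        public static void main(String[] args) throws IOException {
            try (FileWriter out = new FileWriter("users.csv")) {
                out.write("username,email,age\n");
                for (int i = 1; i <= 100; i++) {
                    out.write("user" + i + ",user" + i + "@example.com," + (18 + i % 50) + "\n");
                }
            }
            System.out.println("Wrote 100 test users to users.csv");
        }
    }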

Test automation is expensive, and it is an addition to, not a replacement for, manual testing. It can be made cost-effective in the longer term, though, especially in regression testing. One way to generate test cases automatically is model-based testing, in which a model of the system is used for test case generation; research continues into a variety of methodologies for doing so.
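
As a rough, assumption-laden sketch of the model-based idea (the login states, actions, and depth limit below are invented for illustration), a tiny state-machine model can be walked to enumerate candidate test sequences:

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical model-based generation: a tiny state machine of a login flow.
    public class ModelBasedGenerator {

        // state -> (action -> next state)
        static Map<String, Map<String, String>> model = new LinkedHashMap<>();

        public static void main(String[] args) {
            model.put("LoggedOut", Map.of("enterValidCredentials", "LoggedIn",
                                          "enterInvalidCredentials", "Error"));
            model.put("Error", Map.of("retry", "LoggedOut"));
            model.put("LoggedIn", Map.of("logout", "LoggedOut"));

            // Enumerate action sequences of length up to 3, starting from LoggedOut.
            List<String> tests = new ArrayList<>();
            walk("LoggedOut", "", 3, tests);
            tests.forEach(System.out::println);
        }

        static void walk(String state, String path, int depth, List<String> out) {
            if (!path.isEmpty()) {
                out.add(path);                      // every prefix is a candidate test case
            }
            if (depth == 0) {
                return;
            }
            for (Map.Entry<String, String> t : model.get(state).entrySet()) {
                walk(t.getValue(), path.isEmpty() ? t.getKey() : path + " -> " + t.getKey(),
                     depth - 1, out);
            }
        }
    }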

Automated testing is the use of tools to assist with a testing effort. There are many types of test automation, including automated unit testing, functional regression testing, and performance testing. Test automation can be as simple as automating the creation of some of your test data, or as complex as a series of scripts and programs that do parallel comparative testing of two or more systems with live data. A small sampling of the different types of automation available is listed below:

· Load and performance testing
· Installation and configuration testing
· Testing for race conditions
· Endurance testing
· Helping the development effort with smoke tests and unit tests
· Analyzing code coverage and runtime analysis
· Automation of test input generation
· Checking for coding standards and compliance
· Regression testing
· many more…

This article does not address the day-to-day details of test automation; instead, it focuses on developing a strategy for automation, putting together an automation team, and some simple first steps, with an emphasis on team makeup and coordination. The steps discussed here apply primarily to teams of three or more people. If your automation effort has only one or two full-time automated testers, there is still good information here for you, though not all of it will apply.

The first step when getting started with automation is to ensure that you set the correct scope for your effort and that your efforts are focused on the right goals. In the next section, we will look at goals for automation and how they affect the rest of your decisions.

Setting Goals

Setting goals for automation can affect what you automate:
· Do you want to find bugs?
· Do you want to establish traceability for some sort of compliance?
· Do you want to support the development team?
· Do you want to establish scripts that allow you to ensure nothing changes in the software?

Use your goals to set your strategy and scope for your automation. If your goal is to support your developers, select tools and training to support that goal. You will need testers who know how to program and who can communicate with developers in their language. They will need to look at source code and configure environments, and they will probably spend a good deal of time working side by side with developers, debugging and troubleshooting. In contrast, if your main goal is to support refactoring and change, you will need a team with experience developing regression scripts, and you will need team members with both strong business knowledge and strong testing skills. They will need to know how to create scripts that look for changes, and they will need to know the business in order to create scripts that detect the underlying changes that may be taking place.

By setting your goals, you can start small and keep things simple for your first time automating. In future iterations or projects, you may want to automate everything, but by starting small now, you will minimize possible rework later when you start adding new tools or new automation techniques. When deciding what to automate for the first time, start with small milestones. For example:

· If you're testing a GUI or Web application, start with testing simple functionality. This could include verifying that all the correct controls exist on the screen, that the proper fields enable or disable when actions are taken, and so on.
· If you're automating performance testing, start with just one virtual user and set your goal at a low number (no more than twenty). When you get one virtual user to work, double it and get two to work. Keep doubling until you reach twenty; each increment can present a new set of challenges. A rough sketch of this ramp-up follows the list.
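
A minimal sketch of that ramp-up idea is below; the target URL and the use of plain threads as virtual users are assumptions for illustration, not a prescribed approach:

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical ramp-up: double the number of virtual users each round, up to 20.
    public class SimpleLoadRamp {

        public static void main(String[] args) throws Exception {
            int users = 1;
            while (true) {
                runRound(users);                       // rounds of 1, 2, 4, 8, 16, 20 users
                if (users >= 20) {
                    break;
                }
                users = Math.min(users * 2, 20);
            }
        }

        static void runRound(int users) throws InterruptedException {
            Thread[] virtualUsers = new Thread[users];
            for (int i = 0; i < users; i++) {
                virtualUsers[i] = new Thread(() -> {
                    try {
                        // Each virtual user issues one request against a hypothetical endpoint.
                        HttpURLConnection conn =
                                (HttpURLConnection) new URL("http://localhost:8080/").openConnection();
                        conn.getResponseCode();
                        conn.disconnect();
                    } catch (Exception e) {
                        System.err.println("Virtual user failed: " + e.getMessage());
                    }
                });
                virtualUsers[i].start();
            }
            for (Thread t : virtualUsers) {
                t.join();
            }
            System.out.println(users + " concurrent virtual users completed");
        }
    }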

Once you have determined your goals, you will need to start thinking about what kinds of skills you will need on your team. The next step addresses one of the most common oversights when staffing an automated testing team. In your context, depending on your goals, you may need more development-oriented automated testers, or you may need some other special skill.

Ideally, have at least one programmer on your team

To ensure efficient, well-planned automated testing, you will need to have at least one experienced programmer in your test-automation group. You'll soon find out that automated testing is code development. If you will be doing a lot of scripting in your automation effort, you will most likely want to build modularity into your scripts. You will want to create some custom functions, extend the test tools, and potentially read test data from databases or data files. Depending on the tool and language, encapsulation, object orientation, or some other method may be used to make the code more maintainable or easier to use. If you are looking at providing code-analysis services (performance profiling, code coverage, runtime analysis), you will need someone who knows how to read the code under test and who can ask the development team the right questions.

Even if the scripting environment you are using is not Java or C++ (which it may very well be), you're still building systems of scripts, data files, and libraries. Record-and-playback features offer quick solutions only for the most common tasks and controls. For advanced automation of any kind, or for any custom controls, you'll need to be able to write your own maintainable code. That means employing programmers, not manual testers who learn to code as they go. Having said that, beware of employing only programmers: all team members should be testers first, skilled in the mental arts of testing as well as armed with the ability to code and design test systems.

Once a decision has been made as to which tool will be used, it is important that the users of that tool know how to use it properly. The next step offers some places to look if you are still choosing your tools and offers some advice on ensuring the automation team knows how to use them.

Familiarity with the tools

Depending on what tools you already have in your arsenal and what types of automated testing you will be doing, this section may or may not apply to you. Most automated testing tools are designed for unit
testing, some sort of regression testing, or performance testing.

The scope of your automation effort will depend heavily on the tools you use and have access to. If you already have an investment in an enterprise test tool, you will probably want to leverage it. If you are operating on a tight budget, you will look more at open-source tools, shareware, and custom tool development and scripting (virtually any scripting language can be its own full-blown automation tool). Below is a sampling of the different names commonly used to label test automation tools:

· Disk imaging tools
· File scanners
· Macro tools
· Memory monitors
· Environmental debuggers
· Requirements verifiers
· Test procedure generators
· Syntax checkers/debuggers
· Runtime error catchers
· Source code testing tools
· Environment testing tools
· Static and dynamic analyzers
· Unit test tools
· Code coverage tools
· Test data generators
· File comparison utilities
· Simulation tools
· Load/Performance testing tools
· Network testing tools
· Test management tools
· GUI testing tools

Regardless of which tool(s) you select, you will need to spend time learning how to use them properly. Go through the tutorials that are provided and read through whatever documentation is available. While the tutorials aren't definitive training guides, they do get you familiar with the software as well as with any vendor-specific terminology, and both of these are important. As you'll soon find out, most tools are large and complex, with feature upon feature, and the tutorials will familiarize you with the features you'll be using most.

Once you know which tools you will be using and have a good idea of how you will be using them, you should start to think about setting some guidelines for how your team will use them together. The next step looks at setting up standards for tool use. These standards will make training easier, make it easier to move team members from project to project, and reduce the cost of ownership for the tools you choose.

Setting up standards

This is just as important in testing as it is in conventional software development. Your test system will develop more rapidly and will be easier to maintain if you establish and enforce naming standards, coding standards, environmental standards, and procedures for error and defect tracking. Having these standards documented will also allow people new to the project team to come up to speed faster.

Naming standards for scripts, test logs, directory structures, data structures, and verification points help to keep everyone on the same page (an illustrative example follows this list of standards).
Coding standards should also be developed and enforced. Once you've used them for a while, you can customize them to fit your needs and the needs of your team.
Environmental standards should ensure that the computers you use all have the same operating system, RAM, hard drive space, and installed software configurations. The only differences should be the specific differences you are looking for in your configuration testing (if applicable).
Procedures for error and defect tracking should describe how to log errors in test scripts, submit defects via your defect-tracking tool, code workarounds into scripts, and remove those workarounds once a bug is resolved.
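
As a purely illustrative example of a naming standard (these conventions are assumptions, not taken from any particular tool or project), a team might agree on patterns such as:

    Test scripts:        TC_<Module>_<Feature>_<NNN>    e.g., TC_Login_InvalidPassword_003
    Test logs:           <ScriptName>_<YYYYMMDD>.log    e.g., TC_Login_InvalidPassword_003_20091030.log
    Verification points: VP_<Screen>_<Check>            e.g., VP_LoginPage_ErrorMessageText
    Test data files:     TD_<Module>.csv                e.g., TD_Login.csv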

Document your team's standards, and be sure your team knows the standards and follows them. This step will also prepare you for the remaining steps, as it will potentially provide a framework for decision-making and review. Next is a step focused more on script-based automation, but the principle can still be applied to other types of automation.

Establishing Baselines

Once you've decided what you're testing, you should establish some simple baselines. As mentioned above, this can be done using record-and-playback features, by performing some simple scripting, by gathering small sets of runtime data, and so on. Whatever type of testing you're implementing, figure out what the bare-minimum tests are for that type of testing to be successful and implement them in very small, very simple chunks. Remember that this is your first project and most likely your first time using these tools; you will not have sophisticated frameworks and test architectures in place yet. Those will come later as your team matures. You will use these baselines as the foundation for everything you implement going forward. Hopefully, you will have some sort of archiving or source control and can always come back to these simple tests if you need to.
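
For a GUI or Web application, a baseline might be nothing more than a smoke test that the application loads. The sketch below assumes Selenium WebDriver, JUnit, and a hypothetical URL and page title; none of these are prescribed here, it simply shows how small a baseline can be:

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    // Hypothetical baseline smoke test: the home page loads and has the expected title.
    public class HomePageBaselineTest {

        @Test
        public void homePageLoads() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://localhost:8080/");          // hypothetical application under test
                assertTrue("Unexpected page title: " + driver.getTitle(),
                           driver.getTitle().contains("My Application"));
            } finally {
                driver.quit();
            }
        }
    }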

Once you have established your baselines, you will be better prepared to start looking at more advanced methods of scripting (or data gathering, or scenarios, and so on). This is important, as the next two steps focus on optimizing these baselines. As your processes and experience mature, you will be able to cut out this step, but be sure you are taking small steps right now.

Modularize the scripts

Now that you have your baseline scripts, grab a good software architect (or your team of testers) and a large whiteboard. Start looking through the code in the scripts for repetitive calls or other common actions. What you're doing is looking for ways to modularize your scripts. Ideally, you want to optimize your scripts so that maintenance is as easy as possible.

A script will cost you more to maintain than it will to create unless you develop the script just as you would the software it's testing. Do it right the first time and reap the rewards in all of the following iterations of the project. After you've planned out what modularization you can do, implement it using whatever method your tools allow for the most reuse (libraries, objects, classes, and so on). More than likely, you'll carry these modules over to following projects, and they'll evolve and change as you do.
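
As one hedged illustration (the element IDs and page flow here are hypothetical), repeated login steps recorded in several scripts could be pulled into a single reusable helper, again assuming a Selenium-style tool:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Hypothetical reusable module: every script that needs to log in calls this helper
    // instead of repeating the same recorded steps.
    public class LoginHelper {

        public static void login(WebDriver driver, String baseUrl, String user, String password) {
            driver.get(baseUrl + "/login");
            driver.findElement(By.id("username")).sendKeys(user);     // element IDs are assumptions
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("loginButton")).click();
        }
    }

A test script would then call LoginHelper.login once at the start, so a change to the login page means updating one helper rather than every script.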

For ideas on how to modularize your scripts, look at user communities for the tools you use, read literature on the topic from any of the major testing websites, talk with your development team, or hire a consultant to train your staff on what to look for. These resources are also good places to find information about the next step.

Use Data Structures and Data-Driven Techniques

Effective and cost-efficient automated testing is data-driven. Data-driven testing simply means that your test cases and test scripts are built around the data that will be entered into the application under test at runtime. That data is stored by some method and can be accessed by some key used in your scripts.

This method offers the greatest flexibility when it comes to developing workarounds for bugs and performing maintenance, and it allows for the fastest development of large sets of test cases. Most likely,
your tool will have some method for implementing this, or you can use a spreadsheet or a database. As a special bonus, once you get good at data-driven techniques, you can use automated methods to
generate some of that test data for you.
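
A minimal data-driven sketch is shown below; the CSV layout and the checkLogin stand-in are assumptions for illustration, with the stand-in taking the place of whatever tool actually drives the application under test:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Hypothetical data-driven runner: each CSV row is one test case (username,password,expectedResult).
    public class DataDrivenLoginTest {

        public static void main(String[] args) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader("login_cases.csv"))) {
                String line = in.readLine();                    // skip the header row
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split(",");
                    String actual = checkLogin(fields[0], fields[1]);
                    String verdict = actual.equals(fields[2]) ? "PASS" : "FAIL";
                    System.out.println(verdict + ": " + fields[0] + " expected=" + fields[2]
                                       + " actual=" + actual);
                }
            }
        }

        // Stand-in for driving the real application under test (for example, through a GUI tool).
        static String checkLogin(String user, String password) {
            return "secret".equals(password) ? "success" : "failure";
        }
    }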

Documentation

Finally, document why you designed things the way you did. Document what each module does, and what each function in it does. All of this documentation is useful as training material or for future
reference, and it helps you keep track of lessons learned. Sometimes documentation is the only thing that can save project scripts that no one has worked on in a while.
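
In a Java-based script library, that might be as simple as a documented helper like the hypothetical one below, where the comments record both what the function does and why it exists:

    /**
     * Reads one column of test data from a CSV file.
     *
     * <p>Why: several scripts need the same list of values; keeping the parsing in one
     * documented helper records the file format (comma-separated, header row) for future
     * maintainers.</p>
     */
    public class TestDataReader {

        /**
         * @param path        location of the CSV file, e.g. "login_cases.csv"
         * @param columnIndex zero-based column to extract
         * @return the values of that column, excluding the header row
         */
        public static java.util.List<String> column(String path, int columnIndex) throws java.io.IOException {
            java.util.List<String> values = new java.util.ArrayList<>();
            try (java.io.BufferedReader in = new java.io.BufferedReader(new java.io.FileReader(path))) {
                String line = in.readLine();                 // header row
                while ((line = in.readLine()) != null) {
                    values.add(line.split(",")[columnIndex]);
                }
            }
            return values;
        }
    }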

Review

Just remember to follow this road map:

· Set goals for automation.
· Have at least one experienced programmer in your testing-automation group.
· Get familiar with your tools.
· Develop standards for your team.
· Establish some baselines.
· Modularize and build reusability and maintainability into your scripts.
· Use data-driven testing techniques whenever possible.
· Document what makes sense to document.

The clarity and simplicity of your goals, your understanding of the tools, and the makeup of your team are the factors that will determine your success.



