FAQs About Software Testing


1. What is testing?

Software testing can be defined as an activity that helps find bugs/defects/errors in a software system under development, in order to provide a bug-free and reliable system/solution to the customer.

In other words, consider an example: suppose you are a good cook and are expecting some guests for dinner. You start making dinner and prepare a few delicious dishes (of course, ones you already know how to make). Finally, when you are about to finish, you ask someone (or check yourself) whether everything is fine and there is no extra salt/chili/anything which, if out of balance, could ruin your evening. This is what is called 'testing'.

You follow this procedure to make sure you do not serve your guests something that is not tasty; otherwise you will be embarrassed and regret the failure!


2. Why do we go for testing?

Well, while making food it's OK to have something extra; people might understand, eat what you made, and may well appreciate your work. But this isn't the case with software project development. If you fail to deliver a reliable, good, problem-free software solution, you fail in your project and you may well lose your client. It can get even worse!

So in order to make sure that you provide your client a proper software solution, you go for testing. You check whether there is any problem or error in the system that could make the software unusable for the client. You have software testers test the system and help find the bugs so they can be fixed on time. You find the problems, fix them, and then try again to find any remaining potential problems.


3. Why is there a need for testing?

OR

Why is there a need for 'independent/separate testing'?

This is a fair question. Before software testing was treated as a separate 'testing project', the testing process still existed, but the developer(s) did it themselves during development.

But the fact is that if you make something yourself, you rarely feel there could be anything wrong with it. It is a common trait of human nature: we feel there is no problem in the system we designed because we developed it, so it must be perfectly functional and fully working. As a result, the hidden bugs, errors, and problems of the system remain hidden and raise their heads when the system goes into production.

On the other hand, it is a fact that when one person starts checking something made by someone else, there is a 99% chance the checker/observer will find some problem with the system (even if the problem is only a misspelled word). Really weird, isn't it? But it's the truth!

Even though this may seem like an odd aspect of human behavior, it has been put to use for the benefit of software projects (or, you may say, any type of project). When you develop something, you hand it over to be checked (tested) so that problems that never surfaced during development can be found. After all, if you can minimize the problems in the system you developed, it benefits you: your client will be happy if the system works without any problem, and it will generate more revenue for you.

BINGO, it's really great, isn't it? That's why we need testing!

4. What is the role of a tester?

A tester is a person who tries to find all possible errors/bugs in the system using various inputs. A tester plays an important part in finding problems with the system and helps improve its quality.

The more bugs you find and fix, the more reliable your system becomes.

A tester has to understand the limits that can make the system break or behave abnormally. The more valid bugs a tester finds, the better the tester.

Running the Automated Tests

This concept is pretty basic: if you want to test the submit button on a login page, you can override the system and programmatically move the mouse to a set of screen coordinates, then send a click event. There is another, much trickier way to do this: you can directly call the internal API that the button's click event handler calls. Calling into the API is good because it's easy; calling an API function from your test code is a piece of cake, just add a function call. But then you aren't actually testing the UI of your application. Sure, you can call the API for functionality testing and then every now and then click the button manually to be sure the right dialog opens.
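For illustration only (this example is not from the original post), here is a minimal Java sketch of the API-level approach. The LoginService class, its login() method, and the credentials are hypothetical stand-ins for whatever internal API the button's click handler calls:

// A minimal sketch of API-level test automation. LoginService is a
// hypothetical stand-in for the internal API behind the submit button.
class LoginService {
    boolean login(String user, String password) {
        return "demo".equals(user) && "demo123".equals(password); // stand-in logic
    }
}

public class LoginApiTest {
    public static void main(String[] args) {
        // Same call the button's click handler would make; the UI itself is never exercised.
        boolean ok = new LoginService().login("demo", "demo123");
        System.out.println(ok ? "API-level test passed" : "API-level test failed");
    }
}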


Rationally, this really should work great, but a lot of testing exists outside the rational space. There may be lots of bugs that only happen when the user goes through the button instead of directly calling the API. And here's the critical part: almost all of your users will use your software through the UI, not the API. So the bugs you miss by going only through the API will be high-exposure bugs. They won't happen all the time, but they're the kind of thing you really don't want to miss, especially if you were counting on your automation to test that part of the program.


If your automation goes only through the API, you get no test coverage of your UI, and you'll have to cover that by hand.


Simulating the mouse is good because it exercises the UI the whole time, but it has its own set of problems. The real issue here is reliability. You have to know the coordinates you're trying to click beforehand. This is doable, but lots of things can make those coordinates change at runtime. Is the window maximized? What's the screen resolution? Is the start menu on the bottom or the left side of the screen? Did the last person rearrange the toolbars? And what if the application is used by Arabic-speaking users, where the display runs right to left? These are all things that will change the absolute location of your UI.


The good news is that there are tricks around a lot of these issues. The first key is to always run at the same screen resolution on all your automated test systems (note: there are bugs we could be missing here, but we won't worry about that now; those are beyond the scope of our automation anyway). We also like to make our first automated test action maximizing the program. This takes care of most of the big issues, but small things can still come up.


The really sophisticated way to handle this is to use relative positioning. If your developers are nice, they can build in some test hooks so you can ask the application where it is. This even works for child windows; you can ask a toolbar where it is. If you know that the 'File -> New' button is always at (x, y) inside the main toolbar, it doesn't matter whether the application is maximized or whether the last user moved all the toolbars around. Just ask the main toolbar where it is, tack on (x, y), and click there.
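For illustration (again, not from the original post), here is a minimal Java sketch of relative positioning using java.awt.Robot; the mainToolbar reference, the test hook that supplies it, and the (12, 12) button offset are all assumptions:

// A minimal sketch of clicking a button by relative positioning.
// The toolbar reference and the button offset inside it are assumed.
import java.awt.Component;
import java.awt.Point;
import java.awt.Robot;
import java.awt.event.InputEvent;

public class RelativeClick {
    public static void clickFileNew(Component mainToolbar) throws Exception {
        Point origin = mainToolbar.getLocationOnScreen(); // ask the toolbar where it is
        int offsetX = 12, offsetY = 12;                   // assumed offset of File -> New inside the toolbar
        Robot robot = new Robot();
        robot.mouseMove(origin.x + offsetX, origin.y + offsetY);
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);   // click where the button should be
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
    }
}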

So this has an advantage over just exercising the APIs, since you're using the UI too, but it also has a disadvantage: it involves a lot of work.


Results Verification

So we have figured out the right way to run the tests, and we have this great test case; but after we have told the program to do its stuff, we need a way to know whether it did the right thing. This is the verification step in our automation, and every automated script needs it.

We have many options:
· Verify the results manually
· Verify the results programmatically
· Use some kind of visual comparison tool.


The first method is to do it ourselves, that is, to manually verify the results and see that they meet our expectations.


The second way is, of course, to verify it programmatically. In this method, we can have a predefined set of expected results (a baseline) that can be compared with the obtained results. The output of this is whether a test case passed or failed. There are many ways to achieve this: we can hard-code the expected results in the program/script, or we can store the expected results in a file (a text file, a properties file, or an XML file), read the expected results from that file, and compare them with the obtained results.
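As a small illustrative sketch of this second approach (the expected-results.properties file name and the key are invented for the example), the expected results could be stored in a properties file and compared with the obtained results:

// A minimal sketch of comparing an obtained result against a stored baseline.
// The file name and key are assumptions for the example.
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

public class BaselineCheck {
    public static boolean verify(String key, String obtained) throws Exception {
        Properties expected = new Properties();
        try (InputStream in = new FileInputStream("expected-results.properties")) {
            expected.load(in);                                 // load the baseline
        }
        String baseline = expected.getProperty(key);           // the stored expected result
        return baseline != null && baseline.equals(obtained);  // pass/fail decision
    }

    public static void main(String[] args) throws Exception {
        System.out.println(verify("login.title", "Welcome") ? "PASS" : "FAIL");
    }
}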


The other way is to grab a bitmap of the screen and save it off somewhere. Then we can use a visual comparison tool to compare it with the expected bitmap files. Using a visual comparison tool is clearly the best, but also the hardest: your automated test gets to a place where it wants to check the state, looks at the screen, and compares that image to a master it has stored away somewhere.
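For illustration, a naive pixel-by-pixel comparison against a stored master bitmap might look like the sketch below; the master.png and actual.png file names are assumptions, and a real visual comparison tool would also support masks and tolerances:

// A minimal sketch of visual verification: compare a captured screen bitmap
// against a stored master image. File names are assumptions.
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class BitmapCompare {
    public static boolean matches(File master, File actual) throws Exception {
        BufferedImage a = ImageIO.read(master);
        BufferedImage b = ImageIO.read(actual);
        if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
            return false;                       // different sizes cannot match
        }
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                if (a.getRGB(x, y) != b.getRGB(x, y)) {
                    return false;               // any differing pixel fails the check
                }
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(matches(new File("master.png"), new File("actual.png")) ? "PASS" : "FAIL");
    }
}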

What is Load Testing?

Definition of Load Testing

- Load testing is the act of testing a system under load.

In the context of physical equipment, load testing is usually carried out to 1.5x the SWL (Safe Working Load), and periodic recertification is required.

Load testing is a way to test the performance of an application/product.

In software engineering it is a blanket term that is used in many different ways across the professional software testing community.

Testing an application under heavy but expected loads is known as load testing. It generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the system's services concurrently. As such, load testing is most relevant for multi-user systems, often ones built using a client/server model, such as a web server tested under a range of loads to determine at what point the system's response time degrades or fails. That said, you could also perform a load test on a word processor or graphics editor by forcing it to read an extremely large document, or on a financial package by forcing it to generate a report based on several years' worth of data, and so on.

There is little agreement on what the specific goals of load testing are. The term is often used synonymously with performance testing, reliability testing, and volume testing.
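As a rough sketch of the idea (the URL, the number of simulated users, and the output format are assumptions; real load testing tools add ramp-up, think times, and detailed reporting), concurrent users can be simulated with plain JDK threads:

// A minimal sketch of simulating concurrent users against a web server.
// The endpoint and the load level are assumptions for the example.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SimpleLoadTest {
    public static void main(String[] args) throws Exception {
        int concurrentUsers = 50;                              // assumed load level
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        for (int i = 0; i < concurrentUsers; i++) {
            pool.submit(() -> {
                try {
                    long start = System.currentTimeMillis();
                    HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://localhost:8080/login").openConnection();
                    int status = conn.getResponseCode();       // forces the request
                    long elapsed = System.currentTimeMillis() - start;
                    System.out.println("status=" + status + " time=" + elapsed + "ms");
                } catch (Exception e) {
                    System.out.println("request failed: " + e.getMessage());
                }
            });
        }
        pool.shutdown();                                       // stop accepting work; running requests finish
    }
}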


Why is load testing important?


Increase uptime and availability of the system

Load testing increases the uptime of your mission-critical systems by helping you spot bottlenecks under large user-stress scenarios before they happen in a production environment.

Measure and monitor performance of your system

Make sure that your system can handle the load of thousands of concurrent users.

Avoid system failures by predicting the behavior under large user loads

It is a failure when so much effort is put into building a system only to realize that it won't scale anymore. Avoid project failures due to not testing high-load scenarios.

Protect IT investments by predicting scalability and performance

Building a product is very expensive. The hardware, the staffing, the consultants, the bandwidth, and more add up quickly. Load testing helps you avoid wasting money on expensive IT resources and ensures that the system will scale.

Definition of a Test Plan

A test plan can be defined as a document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.


In software testing, a test plan gives detailed testing information regarding an upcoming testing effort, including

* Scope of testing
* Schedule
* Test Deliverables
* Release Criteria
* Risks and Contingencies

It can also be described as a detailed account of how the testing will proceed, who will do the testing, what will be tested, how long it will take, and to what quality level the test will be performed.

A few other definitions:

The process of defining a test project so that it can be properly measured and controlled. The test planning process generates a high level test plan document that identifies the software items to be tested, the degree of tester independence, the test environment, the test case design and test measurement techniques to be used, and the rationale for their choice.


A testing plan is a methodological and systematic approach to testing a system such as a machine or software. It can be effective in finding errors and flaws in a system. In order to find relevant results, the plan typically contains experiments with a range of operations and values, including an understanding of what the eventual workflow will be.


A test plan is a document which includes an introduction, assumptions, a list of test cases, a list of features to be tested, the approach, deliverables, resources, risks, and scheduling.


A test plan is a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of what the eventual workflow will be.


A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.

Database Operations Tests:

Testing J2EE applications is often difficult and time-consuming at best, but testing the database operations portion of a J2EE application is especially challenging. Database operations tests must be able to catch logic errors—when a query returns the wrong data, for example, or when an update changes the database state incorrectly or in unexpected ways.

For example, say you have a USER class that represents a single USER table and that database operations on the USER table are encapsulated by a Data Access Object (DAO), UserDAO, as follows:

import java.util.List;

public interface UserDAO {

    /**
     * Returns the list of users with the given name
     * or at least the minimum age.
     */
    public List listUsers(String name, Integer minimumAge);

    /**
     * Returns all the users in the database.
     */
    public List listAllUsers();
}

In this simple UserDAO interface, the listUsers() method should return all rows (from the USER table) that have the specified name or the specified value for minimum age. A test to determine whether you've correctly implemented this method in your own classes must take into consideration several questions:

* Does the method call the correct SQL (for JDBC applications) or the correct query-filtering expression (for object-relational mapping [ORM]-based applications)?
* Is the SQL or query-filtering expression correctly written, and does it return the correct number of rows?
* What happens if you supply invalid parameters? Does the method behave as expected? Does it handle all boundary conditions appropriately?
* Does the method correctly populate the users list from the result set returned from the database?

Thus, even a simple DAO method has a host of possible outcomes and error conditions, each of which should be tested to ensure that an application works correctly. And in most cases, you'll want the tests to interact with the database and use real data—tests that operate purely at the individual class level or use mock objects to simulate database dependencies will not suffice. Database testing is equally important for read/write operations, particularly those that apply many changes to the database, as is often the case with PL/SQL stored procedures.
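A minimal sketch of such a database operations test for the UserDAO above, written in JUnit 4 style; the JdbcUserDAO implementation and the assumption that the test data set contains exactly two users named "Smith" are hypothetical:

// A minimal sketch of a database operations test against real test data.
// JdbcUserDAO and the expected row counts are assumptions for the example.
import static org.junit.Assert.assertEquals;
import java.util.List;
import org.junit.Test;

public class UserDAOTest {
    private final UserDAO dao = new JdbcUserDAO();   // hypothetical implementation under test

    @Test
    public void listUsersReturnsMatchingRows() {
        List users = dao.listUsers("Smith", null);   // query by name only
        assertEquals(2, users.size());               // precise assertion against the known data set
    }

    @Test
    public void listUsersHandlesUnknownName() {
        List users = dao.listUsers("NoSuchName", null);
        assertEquals(0, users.size());               // boundary condition: no matching rows
    }
}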

The bottom line: Only through a solid regime of database tests can you verify that these operations behave correctly.

The best practices in this article pertain specifically to designing tests that focus on these types of data access challenges. The tests must be able to raise nonobvious errors in data retrieval and modification that can occur in the data access abstraction layer. The article's focus is on database operations tests—tests that apply to the layer of the J2EE application responsible for persistent data access and manipulation. This layer is usually encapsulated in a DAO that hides the persistence mechanism from the rest of the application.

Best Practices

The following are some of the testing best practices:

Practice 1: Start with a "testable" application architecture.
Practice 2: Use precise assertions.
Practice 3: Externalize assertion data.
Practice 4: Write comprehensive tests.
Practice 5: Create a stable, meaningful test data set.
Practice 6: Create a dedicated test library.
Practice 7: Isolate tests effectively.
Practice 8: Partition your test suite.
Practice 9: Use an appropriate framework, such as DbUnit, to facilitate the process.

 

What is BugZilla?

BugZilla is a bug tracking system (also called an issue tracking system).

Bug tracking systems allow an individual developer or a group of developers to keep track of outstanding problems with their product effectively. Bugzilla was originally written by Terry Weissman in a programming language called TCL, to replace a rudimentary bug-tracking database used internally by Netscape Communications. Terry later ported Bugzilla from TCL to Perl, and in Perl it remains to this day. Most commercial defect-tracking software vendors at the time charged enormous licensing fees, and Bugzilla quickly became a favorite of the open-source crowd (with its genesis in the open-source browser project, Mozilla). It is now the de facto standard defect-tracking system against which all others are measured.

Bugzilla boasts many advanced features. These include:

* Powerful searching
* User-configurable email notifications of bug changes
* Full change history
* Inter-bug dependency tracking and graphing
* Excellent attachment management
* Integrated, product-based, granular security schema
* Fully security-audited, and runs under Perl's taint mode
* A robust, stable RDBMS back-end
* Web, XML, email and console interfaces
* Completely customisable and/or localisable web user interface
* Extensive configurability
* Smooth upgrade pathway between versions

Why Should We Use Bugzilla?

For many years, defect-tracking software has remained principally the domain of large software development houses. Even then, most shops never bothered with bug-tracking software, and instead simply relied on shared lists and email to monitor the status of defects. This procedure is error-prone and tends to cause those bugs judged least significant by developers to be dropped or ignored.

These days, many companies are finding that integrated defect-tracking systems reduce downtime, increase productivity, and raise customer satisfaction with their systems. Along with full disclosure, an open bug-tracker allows manufacturers to keep in touch with their clients and resellers, to communicate about problems effectively throughout the data management chain. Many corporations have also discovered that defect-tracking helps reduce costs by providing IT support accountability, telephone support knowledge bases, and a common, well-understood system for accounting for unusual system or software issues.

But why should you use Bugzilla?

Bugzilla is very adaptable to various situations. Known uses currently include IT support queues, Systems Administration deployment management, chip design and development problem tracking (both pre-and-post fabrication), and software and hardware bug tracking for luminaries such as Redhat, NASA, Linux-Mandrake, and VA Systems. Combined with systems such as CVS, Bonsai, or Perforce SCM, Bugzilla provides a powerful, easy-to-use solution to configuration management and replication problems.

Bugzilla can dramatically increase the productivity and accountability of individual employees by providing a documented workflow and positive feedback for good performance. How many times do you wake up in the morning, remembering that you were supposed to do something today, but you just can't quite remember? Put it in Bugzilla, and you have a record of it from which you can extrapolate milestones, predict product versions for integration, and follow the discussion trail that led to critical decisions.

Ultimately, Bugzilla puts the power in your hands to improve your value to your employer or business while providing a usable framework for your natural attention to detail and knowledge store to flourish.

Important Considerations for Test Automation

Often when a test automation tool is introduced to a project, the expectations for the return on investment are very high. Project members anticipate that the tool will immediately narrow the testing scope, reducing cost and schedule. However, I have seen several test automation projects fail, and fail miserably.

The following very simple factors largely influence the effectiveness of automated testing; if they are not taken into account, the result is usually a lot of lost effort and very expensive 'shelfware'.

1. Scope - It is not practical to try to automate everything, nor is there generally the time available. Pick very carefully the functions/areas of the application that are to be automated.

2. Preparation Timeframe - The preparation time for automated test scripts has to be taken into account. In general, the preparation time for automated scripts can be two to three times longer than for manual testing. In reality, chances are that initially the tool will actually increase the testing scope. It is therefore very important to manage expectations. An automated testing tool does not replace manual testing, nor does it replace the test engineer. Initially, the test effort will increase, but when automation is done correctly it will decrease on subsequent releases.

3. Return on Investment - Because the preparation time for test automation is so long, I have heard it stated that the benefit of the test automation only begins to occur after approximately the third time the tests have been run.

4. When is the benefit to be gained? - Choose your objectives wisely, and think seriously about when and where the benefit is to be gained. If your application is changing significantly and regularly, forget about test automation; you will spend so much time updating your scripts that you will not reap many benefits. (However, if only disparate sections of the application are changing, or the changes are minor, or there is a specific section that is not changing, you may still be able to successfully utilise automated tests.) Bear in mind that you may only ever be able to do a complete automated test run when your application is almost ready for release, i.e. nearly fully tested! If your application is very buggy, the likelihood is that you will not be able to run a complete suite of automated tests, due to the failing functions encountered.

5. The Degree of Change - The best use of test automation is for regression testing, whereby you use automated tests to ensure that pre-existing functions (e.g. functions from version 1.0, i.e. not new functions in this release) are unaffected by any changes introduced in version 1.1. And, since proper test automation planning requires that the test scripts be designed so that they are not totally invalidated by a simple GUI change (such as renaming or moving a particular control), you need to take into account the time and effort required to update the scripts. For example, if your application is changing significantly, the scripts from version 1.0 may need to be completely rewritten for version 1.1, and the effort involved may be prohibitive at worst, or at best not properly accounted for! However, if only disparate sections of the application are changing, or the changes are minor, you should be able to successfully utilise automated tests to regress these areas.

6. Test Integrity - How do you know (measure) whether a test passed or failed? Just because the tool returns a 'pass' does not necessarily mean that the test itself passed. For example, just because no error message appears does not mean that the next step in the script completed successfully. This needs to be taken into account when specifying test script pass/fail criteria.

7. Test Independence - Test independence must be built in so that a failure in the first test case won't cause a domino effect and either prevent, or cause to fail, the rest of the test scripts in that test suite. However, in practice this is very difficult to achieve.

8. Debugging or "testing" of the actual test scripts themselves - time must be allowed for this, and to prove the integrity of the tests themselves.

9. Record & Playback - DO NOT RELY on record & playback as the SOLE means to generates a script. The idea is great. You execute the test manually while the test tool sits in the background and remembers what you do. It then generates a script that you can run to re-execute the test. It's a great idea - that rarely works (and proves very little).

10. Maintenance of Scripts - Finally, there is a high maintenance overhead for automated test scripts: they have to be kept continuously up to date, otherwise you will end up abandoning hundreds of hours of work because there have been too many changes to the application to make modifying the test scripts worthwhile. As a result, it is important that the documentation of the test scripts is also kept up to date.

What is the goal of Software Testing?
* Demonstrate That Faults Are Not Present
* Find Errors
* Ensure That All The Functionality Is Implemented
* Ensure The Customer Will Be Able To Get His Work Done

Modes of Testing
* Static: Static analysis doesn't involve actual program execution; the code is examined and tested without being executed. Ex: Reviews
* Dynamic: In dynamic testing, the code is executed. Ex: Unit testing

Testing methods
* White box testing: Uses the control structure of the procedural design to derive test cases.
* Black box testing: Derives sets of input conditions that will fully exercise the functional requirements for a program.
* Integration testing: Assembling parts of a system.

Verification and Validation
* Verification: Are we doing the job right? The set of activities that ensure that software correctly implements a specific function (i.e., the process of determining whether or not the products of a given phase of the software development cycle fulfill the requirements established during the previous phase). Ex: Technical reviews, quality & configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, etc.
* Validation: Are we doing the right job? The set of activities that ensure that the software that has been built is traceable to customer requirements (an attempt to find errors by executing the program in a real environment). Ex: Unit testing, system testing, installation testing, etc.

What's a 'test case'?
A test case is a document that describes an input, action, or event and an expected response, to determine whether a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
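For illustration only, a hypothetical test case for a login screen (all names and values invented) might look like this:

Test Case ID: TC-LOGIN-001
Test Case Name: Valid login
Objective: Verify that a registered user can log in with valid credentials
Test Conditions/Setup: User account "demo" exists and the login page is reachable
Input Data: Username "demo", password "demo123"
Steps: 1. Open the login page. 2. Enter the username and password. 3. Click Submit.
Expected Result: The user is logged in and the home page is displayed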

What is a software error?
A mismatch between the program and its specification is an error in the program if and only if the specification exists and is correct.

Risk Driven Testing
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience.

Considerations can include:
- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which aspects of the application are most important to the customer?
- Which parts of the code are most complex, and thus most subject to errors?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of tests could easily cover multiple functions?
Whenever there's too much to do and not enough time to do it, we have to prioritize so that at least the most important things get done, which is why prioritization has received a lot of attention. The approach is called risk-driven testing. Here's how you do it: take the pieces of your system, whatever units you use (modules, functions, sections of the requirements), and rate each piece on two variables, Impact and Likelihood.


Risk has two components: Impact and Likelihood.

Impact is what would happen if this piece somehow malfunctioned. Would it destroy the customer database? Or would it just mean that the column headings in a report didn't quite line up?

Likelihood is an estimate of how probable it is that this piece would fail. Together, Impact and Likelihood determine the Risk for the piece.
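As a hedged sketch of this prioritization (the pieces, the 1-5 rating scale, and multiplying Impact by Likelihood are assumptions for the example, not a formula from the original text):

// A minimal sketch of risk-driven prioritization: rate each piece on
// Impact and Likelihood, then test the highest-risk pieces first.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RiskRanking {
    static class Piece {
        final String name;
        final int impact;      // 1 (cosmetic) .. 5 (destroys the customer database)
        final int likelihood;  // 1 (unlikely) .. 5 (almost certain to fail)
        Piece(String name, int impact, int likelihood) {
            this.name = name; this.impact = impact; this.likelihood = likelihood;
        }
        int risk() { return impact * likelihood; }  // assumed combination rule
    }

    public static void main(String[] args) {
        List<Piece> pieces = new ArrayList<>();
        pieces.add(new Piece("Customer database writes", 5, 3));
        pieces.add(new Piece("Report column headings", 1, 4));
        pieces.add(new Piece("Login", 5, 2));
        pieces.sort(Comparator.comparingInt((Piece p) -> p.risk()).reversed());
        for (Piece p : pieces) {
            System.out.println(p.name + " -> risk " + p.risk());
        }
    }
}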


Test Planning

What is a test plan?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product.

Elements of test planning
* Establish objectives for each test phase
* Establish schedules for each test activity
* Determine the availability of tools, resources
* Establish the standards and procedures to be used for planning and conducting the tests and reporting test results
* Set the criteria for test completion as well as for the success of each test

The Structured Approach to Testing

Test Planning
* Define what to test
* Identify Functions to be tested
* Test conditions
* Manual or Automated
* Prioritize to identify Most Important Tests
* Record Document References

Test Design
* Define how to test
* Identify Test Specifications
* Build detailed test scripts
* Quick Script generation
* Documents

Test Execution
* Define when to test
* Build test execution schedule
* Record test results


Bug Overview

What is a software error?
A mismatch between the program and its specification is an error in the program if and only if the specification exists and is correct.
Example: -
* The date on the report title is wrong
* The system hangs if more than 20 users try to commit at the same time
* The user interface is not standard across programs

Categories of Software errors
* User Interface errors
* Functionality errors
* Performance errors
* Output errors
* Documentation errors

What Do You Do When You Find a Bug?
IF A BUG IS FOUND,
* alert the developers that a bug exists
* show them how to reproduce the bug
* ensure that if the developer fixes the bug, it is fixed correctly and the fix didn't break anything else
* keep management apprised of the outstanding bugs and correction trends

Bug Writing Tips
Ideally, you should be able to write a bug report clearly enough for a developer to reproduce and fix the problem, and for another QA engineer to verify the fix, without either of them having to come back to you, the author, for more information.
To write a fully effective report you must:
* Explain how to reproduce the problem
* Analyze the error so you can describe it in a minimum number of steps
* Write a report that is complete and easy to understand


Product Test Phase - Product Testing Cycle

Pre-Alpha
Pre-Alpha is the test period during which the product is made available for internal testing to QA, Information Development, and other internal users.
Alpha

Alpha is the test period during which the product is complete and usable in a test environment but not necessarily bug-free. It is the final chance to get verification from customers that the tradeoffs made in the final development stage are coherent.
Entry to Alpha
* All features complete/testable (no urgent bugs or QA blockers)
* High bugs on primary platforms fixed/verified
* 50% of medium bugs on primary platforms fixed/verified
* All features tested on primary platforms
* Alpha sites ready for install
* Final product feature set determined

Beta
Beta is the test period during which the product should be of "FCS quality" (it is complete and usable in a production environment). The purpose of the Beta ship and test period is to test the company's ability to deliver and support the product (and not to test the product itself). Beta also serves as a chance to get a final "vote of confidence" from a few customers to help validate our own belief that the product is now ready for volume shipment to all customers.
Entry to Beta

* At least 50% positive response from Alpha sites
* All customer bugs addressed via patches/drops in Alpha
* All bugs fixed/verified
* Bug fixes regression tested
* Bug fix rate exceeds find rate consistently for two weeks
* Beta sites ready for install

GM (Golden Master)
GM is the test period during which the product should require minimal work, since everything was done prior to Beta. The only planned work should be to revise part numbers and version numbers, prepare documentation for final printing, and sanity testing of the final bits.
Entry to Golden Master

* Beta sites declare the product is ready to ship
* All customer bugs addressed via patches/drops in Beta
* All negative responses from sites tracked and evaluated
* Support declares the product is supportable/ready to ship
* Bug find rate is lower than fix rate and steadily decreasing

FCS (First Customer Ship)
FCS is the period which signifies entry into the final phase of a project. At this point, the product is considered wholly complete and ready for purchase and usage by the customers.
Entry to FCS

* Product tested for two weeks with no new urgent bugs
* Product team declares the product is ready to ship

Pros of Automation

1. If you have to run a set of tests repeatedly, automation is a huge win for you

2. It gives you the ability to run automation against code that frequently changes to catch regressions in a timely manner

3. It gives you the ability to run automation in mainstream scenarios to catch regressions in a timely manner (see What is a Nightly)

4. Aids in testing a large test matrix (different languages on different OS platforms). Automated tests can be run at the same time on different machines, whereas the manual tests would have to be run sequentially.

Cons of Automation

1. It costs more to automate. Writing the test cases and writing or configuring the automation framework you're using costs more initially than running the test manually.

2. You can't automate visual references. For example, if you can't tell the font color via code or the automation tool, it has to be a manual test.

Pros of Manual

1. If the test case only runs twice per coding milestone, it most likely should be a manual test; that is less costly than automating it.

2. It allows the tester to perform more ad-hoc (random) testing. In my experience, more bugs are found via ad-hoc testing than via automation. And the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.

Cons of Manual

1. Running tests manually can be very time consuming

2. Each time there is a new build, the tester must rerun all required tests, which after a while becomes very mundane and tiresome.

Other deciding factors

1. What you automate depends on the tools you use. If the tools have any limitations, those tests are manual.

2. Is the return on investment worth automating? Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?

Criteria for automating

There are two sets of questions to determine whether automation is right for your test case:

Is this test scenario automatable?

1. Yes, and it will cost a little

2. Yes, but it will cost a lot

3. No, it is not possible to automate

How important is this test scenario?

1. I must absolutely test this scenario whenever possible

2. I need to test this scenario regularly

3. I only need to test this scenario once in a while

If you answered #1 to both questions – definitely automate that test
If you answered #1 or #2 to both questions – you should automate that test
If you answered #2 to both questions – you need to consider if it is really worth the investment to automate

What happens if you can’t automate?

Let's say that you have a test that you absolutely need to run whenever possible, but it isn't possible to automate. Your options are:

1. Reevaluate – do I really need to run this test this often?

2. What’s the cost of doing this test manually?

3. Look for new testing tools

4. Consider test hooks

Black Box Testing


Black box testing or functional testing is used to check that the outputs of a program, given certain inputs, conform to the functional specification of the program.

The term black box indicates that the internal implementation of the program being executed is not examined by the tester. For this reason black box testing is not normally carried out by the programmer.

Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. Because of this, black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary.
For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. For this kind of testing, test groups are often used: "Test groups are sometimes called professional idiots...people who are good at designing incorrect data."

The opposite of this is glass box testing, where test data are derived from direct examination of the code to be tested. For glass box testing, the test cases cannot be determined until the code has actually been written. Both of these testing techniques have advantages and disadvantages, but when combined, they help to ensure thorough testing of the product.
Also, due to the nature of black box testing, test planning can begin as soon as the specifications are written.


Advantages of Black Box Testing
Black box testing has several advantages:

1. It is more effective on larger units of code than glass box testing.

2. The tester needs no knowledge of implementation, including specific programming languages.

3. The tester and the programmer are independent of each other, so the test is unbiased.

4. Tests are done from the user's point of view, not the designer's.

5. It helps expose any ambiguities or inconsistencies in the specifications.

6. Test cases can be designed as soon as the specifications are complete.



Disadvantages of Black Box Testing


1. Only a small number of possible inputs can actually be tested; testing every possible input stream would take an inordinate amount of time, so many program paths will go untested.

2. Without clear and concise specifications, test cases are hard to design.

3. There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried, and tests can be redundant if the software designer has already run the same test case.

4. Testing cannot be directed toward specific segments of code which may be very complex (and therefore more error-prone).

5. Most testing-related research has been directed toward glass box testing.



Testing Strategies/Techniques

1. Black box testing should make use of randomly generated inputs (only a test range should be specified by the tester), to eliminate any guess work by the tester as to the methods of the function

2. Data outside of the specified input range should be tested to check the robustness of the program

3. Boundary cases should be tested (top and bottom of the specified range) to make sure the highest and lowest allowable inputs produce proper output (a small sketch follows this list)

4. The number zero should be tested when numerical data is to be input

5. Stress testing should be performed (try to overload the program with inputs to see where it reaches its maximum capacity), especially with real time systems

6. Crash testing should be performed to see what it takes to bring the system down

7. Test monitoring tools should be used whenever possible to track which tests have already been performed and the outputs of these tests to avoid repetition and to aid in the software maintenance

8. Other functional testing techniques include: transaction testing, syntax testing, domain testing, logic testing, and state testing.

9. Finite state machine models can be used as a guide to design functional tests

10. According to Beizer the following is a general order by which tests should be designed:

11. Clean tests against requirements.

12. Additional structural tests for branch coverage, as needed.

13. Additional tests for data-flow coverage as needed.

14. Domain tests not covered by the above.

15. Special techniques as appropriate--syntax, loop, state, etc.

16. Any dirty tests not covered by the above.
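
As a hedged illustration of the boundary-case strategy above (the validateAge function and its 18-65 accepted range are invented for the example):

// A minimal sketch of boundary testing: exercise the bottom and top of the
// specified range, the values just outside it, and zero for numeric input.
public class BoundaryTest {
    // Hypothetical function under test: accepts ages in the range 18..65.
    static boolean validateAge(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        check(validateAge(18), "lowest allowable input accepted");
        check(validateAge(65), "highest allowable input accepted");
        check(!validateAge(17), "just below the range rejected");
        check(!validateAge(66), "just above the range rejected");
        check(!validateAge(0), "zero rejected");
    }

    static void check(boolean condition, String description) {
        System.out.println((condition ? "PASS: " : "FAIL: ") + description);
    }
}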

Glass box testing requires intimate knowledge of program internals, while black box testing is based solely on knowledge of the system requirements. Because it is primarily concerned with program internals, the SE literature has devoted most of its methodology-development effort to glass box tests. However, since the importance of black box testing has gained general acknowledgement, a certain number of useful black box testing techniques have also been developed.
It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented.

Advantages of glass box testing

1. Forces test developer to reason carefully about implementation

2. Approximates the partitioning done by execution equivalence

3. Reveals errors in "hidden" code:

4. Beneficent side-effects

5. Optimizations (e.g. charTable that changes reps when size > 100)


Disadvantages of glass box testing

1. Expensive

2. Miss cases omitted in the code


Unit, Component and Integration testing



Unit: The smallest compilable component. A unit typically is the work of one programmer (at least in principle). As defined, it does not include any called sub-components (for procedural languages) or communicating components in general.

Unit Testing: In unit testing, called components (or communicating components) are replaced with stubs, simulators, or trusted components. Calling components are replaced with drivers or trusted super-components. The unit is tested in isolation.
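A small hypothetical sketch of this idea (OrderService, PriceLookup, and the prices are all invented): the called component is replaced with a stub so the unit can be tested in isolation.

// A minimal sketch of unit testing with a stub replacing a called component.
interface PriceLookup {
    double priceOf(String sku);               // the called component's interface
}

class OrderService {                          // the unit under test
    private final PriceLookup lookup;
    OrderService(PriceLookup lookup) { this.lookup = lookup; }
    double total(String sku, int quantity) { return lookup.priceOf(sku) * quantity; }
}

public class OrderServiceUnitTest {
    public static void main(String[] args) {
        PriceLookup stub = sku -> 2.50;                 // stub stands in for the real component
        OrderService unit = new OrderService(stub);     // the unit is tested in isolation
        double total = unit.total("ABC-1", 4);
        System.out.println(total == 10.0 ? "unit test passed" : "unit test failed: " + total);
    }
}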

Component: A unit is a component. The integration of one or more components is a component.

Note: The reason for "one or more", as contrasted with "two or more", is to allow for components that call themselves recursively.

Component testing: The same as unit testing, except that all stubs and simulators are replaced with the real thing.

Two components (actually one or more) are said to be integrated when:
a. They have been compiled, linked, and loaded together.
b. They have successfully passed the integration tests at the interface between them.

Thus, components A and B are integrated to create a new, larger component (A,B). Note that this does not conflict with the idea of incremental integration; it just means that A is a big component and B, the component added, is a small one.

Integration testing: Carrying out integration tests.

Integration tests (after Leung and White) are described here for procedural languages; this is easily generalized for OO languages by using the equivalent constructs for message passing. In the following, the word "call" is to be understood in the most general sense of a data flow and is not restricted to formal subroutine calls and returns; it includes, for example, passage of data through global data structures and/or the use of pointers.

Let A and B be two components in which A calls B.
Let Ta be the component-level tests of A.
Let Tb be the component-level tests of B.
Tab = the tests in A's suite that cause A to call B.
Tbsa = the tests in B's suite for which it is possible to sensitize A (the inputs are to A, not B).
Tbsa + Tab = the integration test suite (+ = union).

Note: "Sensitize" is a technical term. It means inputs that will cause a routine to go down a specified path. The inputs are to A; not every input to A will cause A to traverse a path in which B is called. Tbsa is the set of tests which do cause A to follow a path in which B is called. The outcome of the test of B may or may not be affected.

There have been variations on these definitions, but the key point is that it is pretty darn formal, and there's a goodly hunk of testing theory, especially as concerns integration testing, OO testing, and regression testing, based on them.

As to the difference between integration testing and system testing: system testing specifically goes after behaviors and bugs that are properties of the entire system, as distinct from properties attributable to components (unless, of course, the component in question is the entire system). Examples of system testing issues: resource loss bugs, throughput bugs, performance, security, recovery, and transaction synchronization bugs (often misnamed "timing bugs").

Alpha Test
The part of the Test Phase of the PLC where code is complete and the product has achieved a degree of stability. The product is fully testable (determined by QA). All functionality has been implemented, and QA has finished the implementation of the test plans/cases. Ideally, this is when development feels the product is ready to be shipped.

Automated Testing
Creation of individual tests that run without direct tester intervention.

Beta Test
The part of the Test Phase of the PLC where integration testing plans are finished and depth testing coverage goals are met; ideally, QA says the product is ready to ship. The product is stable enough for external testing (determined by QA).

Black Box Test

Tests in which the software under test is treated as a black box. You can't "see" into it. The test provides inputs and responds to outputs without considering how the software works.

Boundary Testing
Test which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).

Breadth Testing
Matrix tests which generally cover all product components and functions on an individual basis. These are usually the first automated tests available after the functional specifications have been completed and test plans have been drafted.

Breath Testing
Generally a good thing to do after eating garlic and before going out into public. Or you may have to take a breath test if you're DUI.

Bug
A phenomenon with an understanding of why it happened.

Code Complete
Phase of the PLC where functionality is coded in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Freeze
When development has finished all new functional code. This is when development is in a "bug fixing" stage.

Coding Phase
Phase of the PLC where development is coding product to meet Functional/Architectural Specifications. QA develops test tools and test cases during this phase.

Compatibility Test
Tests that check for compatibility of other software or hardware with the software being tested.

Concept Phase

Phase of the PLC where an idea for a new product is developed and a preliminary definition of the product is established. Research plans should be put in place and an initial analysis of the competition should be completed. The main goal of this phase is to determine product viability and obtain funding for further research.

Coverage analysis

Shows which functions (i.e., GUI and C code level) have been touched and which have not.

Data Validation
Verification of data to assure that it is still correct.

Debug
To search for and eliminate malfunctioning elements or errors in the software.

Definition Phase

See Design Phase.

Dependency
This is when a component of a product is dependent on an outside group, and the delivery of the product or the reaching of a certain milestone is affected.

Depth Testing
Encompasses Integration testing, real world testing, combinatorial testing, Interoperability and compatibility testing.

Design Phase
Phase of the PLC where the functions of the product are written down. Features and requirements are defined in this phase. Each department develops its plan and resource requirements for the product during this phase.

Dot Release
A major update to a product.

Feature
A bug that no one wants to admit to.

Focus
The center of interest or activity. In software, focus refers to the area of the screen where the insertion point is active.

Functional Specifications

Phase of the PLC defining the modules, their implementation requirements and approach, and the exposed API. Each function is specified here, including the expected results of each function.

GM
See Green Master.

Green Master (GM)
Phase of the PLC where the certification stage begins. All bugs, regressed against the product, must pass. Every build is a release candidate (determined by development).

GUI
Graphical User Interface.

Integration Testing
Depth testing which covers groups of functions at the subsystem level.

Interoperability Test
Tests that verify operability between software and hardware.

Load Test
Load tests study the behavior of the program when it is working at its limits. Types of load tests are Volume tests, Stress tests, and Storage tests.

Localization
This term refers to making software specifically designed for a specific locality.

Maintenance Release
See Inline.

Metrics
A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

Milestones
Events in the Product Life Cycle which define particular goals.

Performance Test
Test that measures how long it takes to do a function.

Phenomenon
A flaw without an understanding.

PLC
Product Life Cycle - see Software Product Life Cycle.

Pre-Alpha
Pre-build 1; product definition phase. (Functional Specification may still be in process of being created).

Product Life Cycle (PLC)
The stages a product goes through from conception to completion. Phases of product development include: Definition Phase, Functional/Architectural Specification Phase, Coding Phase, Code Complete Phase, Alpha, Beta, Zero Bug Build Phase, Green Master Phase, STM, and Maintenance/Inline Phase.

Proposal Phase
Phase of the PLC where the product must be defined with a prioritized feature list and system and compatibility requirements.

QA Plan
A general test plan given at the macro level which defines the activities of the test team through the stages of the Product Life Cycle.

Real World Testing
Integration testing which attempts to create environments that mirror how the product will be used in the "real world".

Regression Testing
Retesting bugs in the system which had been identified as fixed, usually starting from Alpha on.

Resource
People, software, hardware, tools, etc. that have unique qualities and talents that can be utilized for a purpose.

Risk
Something that could potentially contribute to failing to reach a milestone.

STM
See Ship to Manufacturing.

Storage Tests
Tests how memory and space are used by the program, either in resident memory or on disk.

Stress Test
Tests the program's response to peak activity conditions.

Syncopated Test
A test that works in harmony with other tests. The timing is such that both tests work together, but yet independently.

Test Case
A breakdown of each functional area into an individual test. These can be automated or done manually.

Test Phase
Phase of the PLC where the entire product is tested, both internally and externally. Alpha and Beta Tests occur during this phase.

Test Plan
A specific plan that breaks down testing approaches on a functional-area basis.

Test Suite

A set of test cases.

Usability

The degree to which the intended target users can accomplish their intended goals.

Volume Tests
Test the largest tasks a program can deal with.

White Box Test
It is used to test areas that cannot be reached from a black box level. (Sometimes called Glass Box testing.)

Zero Bug Build
Phase of the PLC where the product has stabilized in terms of bugs found and fixed. Development is fixing bugs as fast as they are found, with the net result of zero open bugs on a daily basis. This is usually determined after a few builds have passed. This is the preliminary stage before Green Master.
 



