


Topic: Best approach to software test estimation

Author: vidhya
Senior Member
Joined: 24 Mar 2007
Posts: 114
Posted: 25 Mar 2007 at 11:07pm

There is no simple answer to this. The 'best approach' depends heavily on the particular organization, the project, and the experience of the personnel involved.

For example, given two software projects of similar complexity and size, the appropriate test effort for one project might be very large if it were life-critical medical equipment software, but much smaller for the other if it were a low-cost computer game. A test estimation approach that considered only size and complexity might be appropriate for one project but not for the other.

The following are some approaches to consider.

Implicit Risk Context Approach:
A typical approach to test estimation is for a project manager or QA manager to implicitly use risk context, in combination with past personal experiences in the organization, to choose a level of resources to allocate to testing. In many organizations, the 'risk context' is assumed to be similar from one project to the next, so there is no explicit consideration of risk context. (Risk context might include factors such as the organization's typical software quality levels, the software's intended use, the experience level of developers and testers, etc.) This is essentially an intuitive guess based on experience.

Metrics-Based Approach:
A useful approach is to track an organization's past projects and the test effort that worked well for each. Once there is a set of data covering the characteristics of a reasonable number of projects, this 'past experience' information can be used for future test project planning. (Determining and collecting useful project metrics over time can be an extremely difficult task.) For each new project, the 'expected' required test time can then be adjusted based on whatever metrics or other information is available, such as function point count, number of external system interfaces, unit testing done by developers, risk levels of the project, etc. In the end, this is essentially 'judgement based on documented experience', and it is not easy to do successfully.
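As a rough sketch of how the metrics-based idea might be turned into numbers, here is a small Python example. All project data, field names, and adjustment factors below are invented purely for illustration; a real organization would substitute its own collected metrics and calibration.

# Hypothetical metrics-based test estimate (illustrative data only).
past_projects = [
    # function points, external interfaces, test hours actually spent
    {"fp": 120, "interfaces": 3, "test_hours": 300},
    {"fp": 200, "interfaces": 5, "test_hours": 520},
    {"fp": 80,  "interfaces": 2, "test_hours": 190},
]

# Average test hours per function point across past projects.
hours_per_fp = (sum(p["test_hours"] for p in past_projects)
                / sum(p["fp"] for p in past_projects))

def estimate_test_hours(fp, interfaces, risk_factor=1.0):
    """Baseline from historical hours-per-FP, adjusted by a crude
    per-interface allowance and a project risk multiplier."""
    baseline = fp * hours_per_fp
    interface_allowance = interfaces * 8   # assumed 8 hours per external interface
    return (baseline + interface_allowance) * risk_factor

# Example: a 150-FP project with 4 interfaces and above-average risk.
print(round(estimate_test_hours(150, 4, risk_factor=1.2)), "test hours (rough estimate)")

The point is only that the historical data drives the baseline, while project-specific factors (interfaces, risk) adjust it; the quality of the estimate depends entirely on the quality and relevance of the collected metrics.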

Test Work Breakdown Approach:
Another common approach is to decompose the expected testing work into a collection of small tasks for which estimates can, at least in theory, be made with reasonable accuracy. This of course assumes that an accurate and predictable breakdown of testing tasks and their estimated effort is feasible; in many large projects it is not. For example, if a large number of bugs are found in a project, this will add to the time required for testing, retesting, bug analysis, and reporting. It will also add to the time required for development, and if development schedules and effort do not go as planned, this will further impact testing. A simple sketch of how such a breakdown might be added up follows.
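Here is a minimal Python sketch of summing a test work breakdown with a contingency buffer. The task names, hours, and contingency factor are invented for illustration and would come from the project's own breakdown in practice.

# Hypothetical test work breakdown estimate (illustrative data only).
test_tasks = {
    "test planning":            16,
    "test case design":         40,
    "test environment setup":   12,
    "test execution (cycle 1)": 60,
    "retest / regression":      30,
    "bug analysis & reporting": 20,
}

base_hours = sum(test_tasks.values())

# Contingency to absorb the uncertainty described above
# (e.g. more bugs found than expected, slipped development dates).
contingency = 0.25

total_estimate = base_hours * (1 + contingency)
print(f"Base: {base_hours}h, with {int(contingency * 100)}% contingency: {total_estimate:.0f}h")

The contingency figure is the weak point: if bug counts or development slippage exceed what it covers, the task-level estimates have to be revisited rather than simply scaled.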

         
