How is a defect reported?
Once the test cases are developed using the appropriate techniques, they are executed, and it is during execution that bugs surface. It is very important that these bugs be reported as soon as possible, because the earlier you report a bug, the more time remains in the schedule to get it fixed. A simple example: if you report incorrect functionality documented in the Help file a few months before the product release, the chances that it will be fixed are very high. If you report the same bug a few hours before the release, the odds are that it won't be fixed. The bug is the same whether you report it a few months or a few hours before the release; what matters is the timing.
It is not enough just to find bugs; they should also be reported and communicated clearly and efficiently, keeping in mind the many people who will read the defect report.
Defect tracking tools
(also known as bug tracking tools, issue tracking
tools or problem trackers) greatly aid the testers in reporting and
tracking the bugs found in software applications. They provide a means
of consolidating a key element of project information in one place.
Project managers can then see which bugs have been fixed, which are
outstanding and how long it is taking to fix defects. Senior management
can use reports to understand the state of the development process.
How descriptive should your bug/defect report be?
You should provide enough detail when reporting the
bug, keeping in mind the people who will use it – the test lead, developers,
the project manager, other testers, newly assigned testers, etc. This means
that the report you write should be concise, direct, and clear.
Following are the details your report should contain:
- Bug Title
- Bug identifier (number, ID, etc.)
- The application name or identifier and version
- The function, module, feature, object, screen, etc. where the bug occurred
- Environment (OS, Browser and its version)
- Bug Type or Category/Severity/Priority
o Bug Category: Security, Database, Functionality (Critical/General), UI
o Bug Severity: Severity with which the bug affects the application – Very High, High, Medium, Low, Very Low
o Bug Priority: Recommended priority to be given for a fix of this bug – P0, P1, P2, P3, P4, P5 (P0-Highest, P5-Lowest)
- Bug status (Open, Pending, Fixed, Closed, Re-Open)
- Test case name/number/identifier
- Bug description
- Steps to Reproduce
- Actual Result
- Tester Comments
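The fields listed above can be modeled as a simple record. The following Python sketch is illustrative only – the class, field names, and enumeration values are hypothetical and not taken from any particular defect-tracking tool:

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    VERY_HIGH = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4
    VERY_LOW = 5

class Status(Enum):
    OPEN = "Open"
    PENDING = "Pending"
    FIXED = "Fixed"
    CLOSED = "Closed"
    REOPEN = "Re-Open"

@dataclass
class BugReport:
    title: str
    bug_id: str               # bug identifier (number, ID, etc.)
    application: str          # application name/identifier and version
    location: str             # function, module, feature, screen, etc.
    environment: str          # OS, browser and its version
    category: str             # e.g. Security, Database, Functionality, UI
    severity: Severity
    priority: int             # P0 (highest) .. P5 (lowest)
    status: Status
    test_case: str            # test case name/number/identifier
    description: str
    steps_to_reproduce: list = field(default_factory=list)
    actual_result: str = ""
    tester_comments: str = ""
```

Structuring the report this way makes it easy for a tracking tool to filter and sort defects by severity, priority, or status.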
What does the tester do when the defect is fixed?
Once the reported defect is fixed, the tester needs
to retest it to confirm the fix. This is usually done by executing the
scenarios in which the bug can occur. Once retesting is
complete and the fix is confirmed, the bug can be closed. This
marks the end of the bug life cycle.
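The life cycle just described (report, fix, retest, close or reopen) can be sketched as a small state machine over the statuses listed earlier. The transition table below is an illustrative assumption, not a standard; real tracking tools define their own workflows:

```python
# Allowed bug-status transitions (illustrative; real trackers vary).
TRANSITIONS = {
    "Open":    {"Pending", "Fixed"},
    "Pending": {"Fixed"},
    "Fixed":   {"Closed", "Re-Open"},  # retest passes -> Closed; fails -> Re-Open
    "Re-Open": {"Fixed"},
    "Closed":  set(),                  # end of the bug life cycle
}

def move(status: str, new_status: str) -> str:
    """Validate a status change against the transition table."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

status = "Open"
for step in ("Fixed", "Re-Open", "Fixed", "Closed"):
    status = move(status, step)
print(status)  # Closed
```

Encoding the workflow this way prevents, for example, a closed bug from being silently edited back to Open without going through a retest.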
8. Types of Test Reports
The
documents outlined in the IEEE Standard for Software Test Documentation
cover test planning, test specification, and test reporting.
Test reporting covers four document types:
1. A Test Item Transmittal Report identifies
the test items being transmitted for testing from the development group to
the testing group, in the event that a formal beginning of test
execution is desired.
Details to be included in the report – Purpose, Outline, Transmittal-Report
Identifier, Transmitted Items, Location, Status, and Approvals.
2. A Test Log is used by the test team to record what occurred during test execution.
Details to be included in the report – Purpose, Outline, Test-Log
Identifier, Description, Activity and Event Entries, Execution
Description, Procedure Results, Environmental Information, Anomalous
Events, Incident-Report Identifiers
3. A Test Incident Report describes any event that occurs during test execution that requires further investigation
Details to be included in the report – Purpose, Outline, Test-Incident-Report Identifier, Summary, Impact
4. A Test Summary Report summarizes the testing activities associated with one or more test-design specifications
Details to be included in the report – Purpose, Outline,
Test-Summary-Report Identifier, Summary, Variances, Comprehensiveness
Assessment, Summary of Results, Summary of Activities, and Approvals
9. Software Test Automation
Automating
testing is no different from a programmer using a programming language to
write programs that automate any manual process. One of the problems with
testing large systems is that it can go beyond the capacity of small test
teams: because only a small number of testers are available, the
coverage and depth of testing provided are inadequate for the task at
hand. Expanding the test
team beyond a certain size also becomes problematic, as coordination
overhead increases. A feasible way to avoid this without introducing a loss
of quality is the appropriate use of tools, which can expand an
individual's capacity enormously while maintaining the focus (depth) of
testing on the critical elements.
Consider the following
factors that help determine the use of automated testing tools:
· Examine your current testing process and determine where it needs to be adjusted for using automated test tools.
· Be prepared to make changes in the current ways you perform testing.
· Involve people who will be using the tool to help design the automated testing process.
· Create a set of evaluation criteria for functions that you will want
to consider when using the automated test tool. These criteria may
include the following:
o Test repeatability
o Criticality/risk of applications
o Operational simplicity
o Ease of automation
o Level of documentation of the function (requirements, etc.)
· Examine your existing set of test cases and test scripts to see which ones are most applicable for test automation.
· Train people in basic test-planning skills.
Approaches to Automation
There are three broad options in Test Automation:
Full Manual – reliance on manual testing
o Responsive and flexible, but inconsistent
o Low implementation cost, but high repetitive cost
o Required as a basis for automation
o High skill requirements
Partial Automation – automates repetitive and high-return tasks
o Flexible
o Consistent
o Redundancy possible, but requires duplication of effort
Full Automation – reliance on automated testing
o Very consistent, but relatively inflexible
o High implementation cost
o Economies of scale in repetition, regression, etc.
o Low skill requirements
Fully manual testing
has the benefit of being relatively cheap and effective. But as the quality
of the product improves, the cost of finding each additional bug
rises. Large-scale manual testing also implies large-scale
testing teams, with the related costs of space, overhead, and
infrastructure. Manual testing is far more responsive and flexible
than automated testing, but is prone to tester error through fatigue.
Fully automated testing
is very consistent and allows the repetitions of similar tests at very
little marginal cost. The setup and purchase costs of such automation
are very high however and maintenance can be equally expensive.
Automation is also relatively inflexible and requires rework in order
to adapt to changing requirements.
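The key economic property of automation noted above – repeating similar tests at very little marginal cost – can be illustrated with a minimal regression suite using Python's standard unittest module. The function under test, `apply_discount`, is hypothetical:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    # Each test runs identically on every execution: the marginal
    # cost of re-running the whole suite after a change is near zero.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_boundary_values(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)
        self.assertEqual(apply_discount(99.99, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 120)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```

Once written, such a suite re-executes the same checks, in the same way, after every change – exactly the consistency that manual testing cannot guarantee.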
Partial Automation incorporates
automation only where the most benefit can be achieved. The advantage
is that it specifically targets the tasks best suited to automation and thus
extracts the most benefit from them. It also retains a large component
of manual testing, which maintains the test team's flexibility and offers
redundancy by backing up automation with manual testing. The
disadvantage is that it obviously does not provide benefits as
extensive as either extreme solution.
Choosing the right tool
· Take time to define the tool requirements in terms of technology, process, applications, people skills, and organization.
· During tool evaluation, prioritize which test types
are the most critical to your success and judge the candidate tools on
those criteria.
· Understand the tools and their trade-offs. You may
need to use a multi-tool solution to get higher levels of test-type
coverage. For example, you will need to combine the capture/play-back
tool with a load-test tool to cover your performance test cases.
· Involve potential users in the definition of tool requirements and evaluation criteria.
· Build an evaluation scorecard to compare each
tool’s performance against a common set of criteria. Rank the criteria
in terms of relative importance to the organization.
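The evaluation scorecard described in the last point can be computed directly as a weighted sum. In this sketch the criteria, weights, and tool scores are made up purely for illustration:

```python
# Criteria weighted by relative importance to the organization (weights sum to 1.0).
criteria_weights = {
    "test repeatability": 0.30,
    "ease of automation": 0.25,
    "operational simplicity": 0.20,
    "test-type coverage": 0.15,
    "vendor support": 0.10,
}

# Each candidate tool scored 1-5 per criterion (hypothetical scores).
scores = {
    "Tool A": {"test repeatability": 5, "ease of automation": 3,
               "operational simplicity": 4, "test-type coverage": 4,
               "vendor support": 2},
    "Tool B": {"test repeatability": 3, "ease of automation": 5,
               "operational simplicity": 3, "test-type coverage": 5,
               "vendor support": 4},
}

def weighted_score(tool_scores: dict) -> float:
    """Sum of (weight x score) across all criteria."""
    return round(sum(w * tool_scores[c] for c, w in criteria_weights.items()), 2)

# Rank candidate tools against the common set of criteria.
ranking = sorted(scores, key=lambda t: weighted_score(scores[t]), reverse=True)
for tool in ranking:
    print(tool, weighted_score(scores[tool]))
```

Because the weights encode organizational priorities, changing them (for example, giving test-type coverage more weight) can legitimately change which tool wins – which is exactly why the ranking of criteria should be agreed on before scoring begins.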
Top Ten Challenges of Software Test Automation
1. Buying the Wrong Tool
2. Inadequate Test Team Organization
3. Lack of Management Support
4. Incomplete Coverage of Test Types by the selected tool
5. Inadequate Tool Training
6. Difficulty using the tool
7. Lack of a Basic Test Process or Understanding of What to Test
8. Lack of Configuration Management Processes
9. Lack of Tool Compatibility and Interoperability
10. Lack of Tool Availability
10. Introduction To Software Standards
Capability Maturity Model -
Developed by the software community in 1986 with leadership from the
SEI. The CMM describes the principles and practices underlying software
process maturity. It is intended to help software organizations improve
the maturity of their software processes in terms of an evolutionary
path from ad hoc, chaotic processes to mature, disciplined software
processes. The focus is on identifying key process areas and the
exemplary practices that may comprise a disciplined software process.
What makes up the CMM? The CMM is organized into five maturity levels:
· Initial
· Repeatable
· Defined
· Managed
· Optimizing
Except for Level 1, each maturity level decomposes into several key process
areas that indicate the areas an organization should focus on to
improve its software process.
Level 1 – Initial: The software process is ad hoc, occasionally even chaotic. (Levels 2 through 5 are characterized, respectively, as a disciplined process, a standard and consistent process, a predictable process, and a continuously improving process.)
Level 2 – Repeatable: Key practice areas – Requirements management,
Software project planning, Software project tracking & oversight,
Software subcontract management, Software quality assurance, Software
configuration management
Level 3 – Defined: Key practice areas – Organization process focus,
Organization process definition, Training program, Integrated software
management, Software product engineering, Intergroup coordination, Peer
reviews
Level 4 – Managed: Key practice areas – Quantitative Process Management, Software Quality Management
Level 5 – Optimizing: Key practice areas – Defect prevention, Technology change management, Process change management
Six Sigma
Six Sigma is a quality management program to achieve “six sigma”
levels of quality. It was pioneered by Motorola in the mid-1980s and
has spread to many other manufacturing companies, notably General
Electric Corporation (GE).
Six Sigma is a rigorous and disciplined methodology that uses data
and statistical analysis to measure and improve a company’s operational
performance by identifying and eliminating “defects” from manufacturing
to transactional and from product to service. Commonly defined as 3.4
defects per million opportunities, Six Sigma can be defined and
understood at three distinct levels: metric, methodology, and philosophy.
Six Sigma processes are executed by Six Sigma Green Belts and
Six Sigma Black Belts, and are overseen by Six Sigma Master Black Belts.
ISO
ISO – International Organization for Standardization is a network of
the national standards institutes of 150 countries, on the basis of one
member per country, with a Central Secretariat in Geneva, Switzerland,
that coordinates the system. ISO is a non-governmental organization.
ISO has developed over 13,000 International Standards on a variety of
subjects.
11. Software Testing Certifications
Certification information for software QA and test engineers:
CSQE – ASQ (American
Society for Quality)'s program for the CSQE (Certified Software Quality
Engineer) certification – information on requirements, an outline of the required 'Body of
Knowledge', a listing of study references, and more.
CSQA/CSTE – QAI (Quality
Assurance Institute)'s programs for the CSQA (Certified Software Quality
Analyst) and CSTE (Certified Software Test Engineer) certifications.
ISEB Software
Testing Certifications – The British Computer Society maintains a
program of two levels of certification: the ISEB Foundation Certificate and the
Practitioner Certificate.
ISTQB Certified Tester -
The International Software Testing Qualifications Board is a part of
the European Organization for Quality – Software Group, based in
Germany. The certifications are based on experience, a training course
and test. Two levels are available: Foundation and Advanced.
12. Facts about Software Engineering
Following are some facts that can help you gain a better insight into the realities of Software Engineering.
1. The best programmers are up to 28 times better than the worst programmers.
2. New tools/techniques cause an initial LOSS of productivity/quality.
3. The answer to a feasibility study is almost always “yes”.
4. A May 2002 report prepared for the National Institute of Standards
and Technologies (NIST)(1) estimates the annual cost of software
defects in the United States as $59.5 billion.
5. Reusable components are three times as hard to build.
6. For every 25% increase in problem complexity, there is a 100% increase in solution complexity.
7. 80% of software work is intellectual. A fair amount of it is creative. Little of it is clerical.
8. Requirements errors are the most expensive to fix during production.
9. Missing requirements are the hardest requirement errors to correct.
10. Error-removal is the most time-consuming phase of the life cycle.
11. Software is usually tested at best at the 55-60% (branch) coverage level.
12. 100% coverage is still far from enough.
13. Rigorous inspections can remove up to 90% of errors before the first test case is run.
14. Maintenance typically consumes 40-80% of software costs. It is probably the most important life cycle phase of software.
15. Enhancements represent roughly 60% of maintenance costs.
16. There is no single best approach to software error removal.