Rapid Software Testing

Over the past two decades, computer systems and the software that runs them have made their way into all aspects of life. Software is present in our cars, ovens, cell phones, games, and workplaces. It drives billing systems, communications systems, and Internet connections. The proliferation of software systems has reached the point that corporate and national economies are increasingly dependent on the successful development and delivery of software.

As the stakes grow higher in the software marketplace, pressure grows to develop more products at a faster pace. This places increasing demands on software developers and testers: not only must they produce software faster, but the products must also be of good enough quality that customers will be satisfied with them.

There are therefore two major demands placed on today's software test engineer:

  • We need to test quickly to meet aggressive product delivery schedules.

  • We need to test well enough that damaging defects don't escape to our customers.

The challenge is to satisfy each of these needs without sacrificing the other. The purpose of this book is to define an efficient test process and to present practical techniques that satisfy both demands. We begin by examining the fundamentals of software development and software testing.

Basic Definitions for Software Testing

Before launching into a discussion of the software development process, let's define some basic terms and concepts. The logical place to start is with software testing.

Software testing is a process of analyzing or operating software for the purpose of finding bugs.

Simple as this definition is, it contains a few points that are worth elaboration. The word process is used to emphasize that testing involves planned, orderly activities. This point is important if we're concerned with rapid development, as a well thought-out, systematic approach is likely to find bugs faster than poorly planned testing done in a rush.

According to the definition, testing can involve either "analyzing" or "operating" software. Test activities that are associated with analyzing the products of software development are called static testing. Static testing includes code inspections, walkthroughs, and desk checks. In contrast, test activities that involve operating the software are called dynamic testing. Static and dynamic testing complement one another, and each type has a unique approach to detecting bugs.

The final point to consider in the definition of software testing is the meaning of "bugs." In simple terms, a bug is a flaw in the development of the software that causes a discrepancy between the expected result of an operation and the actual result. The bug could be a coding problem, a problem in the requirements or the design, or it could be a configuration or data problem. It could also be something that is at variance with the customer's expectation, which may or may not be in the product specifications.

THE LIFE OF A BUG

The life of a software bug may be described as follows. A bug is born when a person makes an error in some activity that relates to software development, such as defining a requirement, designing a program, or writing code. This error gets embedded in that person's work product (requirement document, design document, or code) as a fault.

As long as this fault (also known as a bug or defect) remains in the work product, it can give rise to other bugs. For example, if a fault in a requirements document goes undetected, it is likely to lead to related bugs in the system design, program design, code, and even in the user documentation.

A bug can go undetected until a failure occurs, which is when a user or tester perceives that the system is not delivering the expected service. In the system test phase, the goal of the test engineer is to induce failures through testing and thereby uncover and document the associated bugs so they can be removed from the system. Ideally the life of a bug ends when it is uncovered in static or dynamic testing and fixed.

One practical consequence of the definition of testing is that test engineers and development engineers need to take fundamentally different approaches to their jobs. The goal of the developer is to create bug-free code that satisfies the software design and meets the customer's requirements. The developer is trying to "make" code. The goal of the tester is to analyze or operate the code to expose the bugs that are latent in the code as it is integrated, configured, and run in different environments. The tester is trying to "break" the code. In this context, a good result of a software test for a developer is a pass, but for that same test a successful outcome for the test engineer is a fail. Ultimately, of course, both the developer and tester want the same thing: a product that works well enough to satisfy their customers.
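To make the expected-versus-actual discrepancy concrete, here is a minimal sketch in Python; the function, the requirement it violates, and the test are all hypothetical, invented for illustration rather than taken from the text above.

    def shipping_cost(weight_kg):
        """Hypothetical function under test: flat rate plus a per-kg charge."""
        # Bug: int() truncates, so a 1.5 kg parcel is billed as 1 kg even
        # though the (assumed) requirement says to round weight up.
        return 5.00 + 2.00 * int(weight_kg)

    def test_shipping_cost_fractional_weight():
        expected = 5.00 + 2.00 * 2   # 1.5 kg should be billed as 2 kg
        actual = shipping_cost(1.5)
        assert actual == expected, f"expected {expected}, got {actual}"

    try:
        test_shipping_cost_fractional_weight()
        print("PASS - no bug exposed by this test")
    except AssertionError as err:
        print(f"FAIL: {err}")   # a successful outcome for the tester

The same run is a disappointment for the developer and a success for the tester: the failure is exactly what exposes the latent bug.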

There are two basic functions of software testing: one is verification and the other is validation. Schulmeyer and Mackenzie (2000) define verification and validation (V&V) as follows:

Verification is the assurance that the products of a particular phase in the development process are consistent with the requirements of that phase and the preceding phase.

Validation is the assurance that the final product satisfies the system requirements.

The purpose of validation is to ensure that the system has implemented all requirements, so that each function can be traced back to a particular customer requirement. In other words, validation makes sure that the right product is being built.

Verification is focused more on the activities of a particular phase of the development process. For example, one of the purposes of system testing is to give assurance that the system design is consistent with the requirements that were used as an input to the system design phase. Unit and integration testing can be used to verify that the program design is consistent with the system design. In simple terms, verification makes sure that the product is being built right.

One additional concept that needs to be defined is quality. Like beauty, quality is subjective and can be difficult to define. We will define software quality in terms of three factors: failures in the field, reliability, and customer satisfaction. A software product is said to have good quality if:
  • It has few failures when used by the customer, indicating that few bugs have escaped to the field.

  • It is reliable, meaning that it seldom crashes or demonstrates unexpected behavior when used in the customer environment.

  • It satisfies a majority of users.

One implication of this definition of quality is that the test group must not only take measures to prevent and detect defects during product development, but must also be concerned with the reliability and usability of the product.

What is Rapid Testing?

Rapid development means different things to different people. To some people, it's rapid prototyping. To others, it's a combination of CASE tools, intensive user involvement, and tight time boxes. Rather than identify rapid development with a specific tool or method, here is one definition:

Rapid development is a generic term that means the same thing as "speedy development" or "shorter schedules." It means developing software faster than you do now. A "rapid development project," then, is any project that needs to emphasize development speed.

In a similar vein, rapid testing means testing software faster than you do now, while maintaining or improving your standards of quality. Unfortunately, there is no simple way to achieve rapid testing. Rapid testing can be pictured as a structure built on a foundation of four components; if any of these components is weak, the effectiveness of testing will be greatly impaired. The four components that must be optimized for rapid testing are people, an integrated test process, static testing, and dynamic testing. We'll briefly examine each of them.

[Figure: Essential components of rapid testing.]

People

As every test manager knows, the right people are an essential ingredient of rapid testing. Several studies have shown productivity differences of 10:1 or more among software developers, and the same is true of test engineers: not everyone has the skills, experience, or temperament to be a good test engineer. Rapid testing particularly needs people who are disciplined and flexible, who can handle the pressure of an aggressive schedule, and who can be productive contributors from the early phases of the development life cycle.

Integrated Test Process

No matter how good your people may be, if they do not have a systematic, disciplined process for testing, they will not operate at maximum efficiency. The test process needs to be based on sound, fundamental principles, and must be well integrated with the overall software development process. We will spend a good portion of Part I of this book describing ways to improve the test process, with a more detailed discussion of practical techniques and implementation tips presented in Part II. The focus of our discussion will be to explore ways of better integrating the development and test activities.

Static Testing

In the previous section we defined static testing as test activities associated with analyzing the products of software development. Static testing is done for the purpose of validating that a work product such as a design specification properly implements all the system requirements, and of verifying the quality of the design. Static testing is one of the most effective means of catching bugs at an early stage of development, thereby saving substantial time and cost. It involves inspections, walkthroughs, and peer reviews of designs, code, and other work products, as well as static analysis to uncover defects in syntax, data structure, and other code components. Static testing is basically anything that can be done to uncover defects without running the code. In the experience of the authors, it is an often-neglected tool. Static testing will be discussed throughout Parts I and II of this book.
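As a small illustration of finding a defect without running the code, the sketch below uses Python's standard ast module to flag bare except clauses in a source fragment; the fragment and the rule being checked are assumptions chosen for this example, not part of the text.

    import ast
    import textwrap

    # Hypothetical source fragment to be analyzed, never executed.
    SOURCE = textwrap.dedent("""
        def load_config(path):
            try:
                return open(path).read()
            except:              # bare except silently swallows every error
                return None
    """)

    # Walk the parse tree looking for exception handlers with no type,
    # i.e. bare 'except:' clauses that can mask real failures.
    for node in ast.walk(ast.parse(SOURCE)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            print(f"line {node.lineno}: bare 'except:' can hide real failures")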

Dynamic Testing

Often when engineers think of testing, they are thinking of dynamic testing, which involves operating the system with the purpose of finding bugs. Whereas static testing does not involve running the software, dynamic testing does. Generally speaking, dynamic testing consists of running a program and comparing its actual behavior to what is expected. If the actual behavior differs from the expected behavior, a defect has been found. Dynamic testing will be used to perform a variety of types of tests such as functional tests, performance tests, and stress tests. Dynamic testing lies at the heart of the software testing process, and if the planning, design, development, and execution of dynamic tests are not performed well, the testing process will be very inefficient. Dynamic testing is not only performed by the test team; it should be a part of the development team's unit and integration testing as well.
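The core loop of dynamic testing (operate the code, then compare actual behavior to expected behavior) can be sketched in a few lines of Python; the function under test and the test data below are hypothetical stand-ins.

    def median(values):
        """Hypothetical unit under test."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    # Each case pairs an input with its expected result.
    cases = [
        ([1, 3, 2], 2),    # typical odd-length input
        ([4, 1],    2.5),  # even-length input: average the middle pair
        ([7],       7),    # boundary: a single element
    ]

    for data, expected in cases:
        actual = median(data)
        verdict = "pass" if actual == expected else "FAIL (defect found)"
        print(f"median({data}) = {actual}, expected {expected}: {verdict}")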

Developing a Rapid Testing Strategy

If you were to analyze your current software development process for ways to improve testing efficiency, where in the process would you look? Would you start by looking at the way you conduct test planning? At the methods and content of your test automation? What about your defect tracking system?

Our approach in this book will be to look at every phase of the software process from the viewpoint of the test engineer to see if there is a way to speed up testing while maintaining or improving quality. The image we have in mind is one of you sitting in the "test engineer's swivel chair," turning to look at every aspect of the development process to see if you can prevent defects from escaping that phase of the process, or to see if you can extract information from that phase that will speed up your test effort.

Before we can take a detailed look at each phase of the software development process from the testing perspective, we need to lay some groundwork. In this chapter we define basic terms and concepts of software testing, and provide an overview of the software development process. Then we examine each phase of a typical development process to see how the efficiency and speed of testing can be improved. When examining each development phase, we bear the following questions in mind:

  • Is there any action that the test team can take during this phase that will prevent defects from escaping?

  • Is there any action that the test team can take during this phase that will help manage risk to the development schedule?

  • Is there any information that can be extracted from this phase that will allow the test team to speed up planning, test case development, or test execution?

If a test process is designed around the answers to these questions, both the speed of testing and the quality of the final product should be enhanced.

The Software Development Process

So far we have talked about examining the software development process without defining exactly what it entails. One reason for not being specific is that there is no one development process that is best suited for all rapid development projects. You might want to develop an embedded controller for a heart pacemaker as "rapidly as possible," but you are not likely to use the same process that a friend down the street is using to develop an online dictionary.

Shari Pfleeger (2001) defines a process as "a series of steps involving activities, constraints, and resources that produce an intended output of some kind." The following is adapted from Pfleeger's list of the attributes of a process:

  • The process prescribes all of the major process activities.

  • The process uses resources, subject to a set of constraints (such as a schedule), and provides intermediate and final products.

  • The process may be composed of subprocesses that are linked in some way. The process may be defined as a hierarchy of processes, organized so that each subprocess has its own process model.

  • Each process activity has entry and exit criteria, so that we know when the activity begins and ends.

  • The activities are organized in a sequence or in parallel to other independent subprocesses, so that it is clear when one activity is performed relative to the other activities.

  • Every process has a set of guiding principles that explains the goals of each activity.

  • Constraints or controls may apply to an activity, resource, or product. For example, the budget or schedule may constrain the length of time an activity may take or a tool may limit the way in which a resource may be used.

When a process relates to the building of a product, it is often called a life cycle. The development of a software product is therefore called a software life cycle. A software life cycle can be described in a variety of ways, but often a model is used that represents key features of the process with some combination of text and illustrations.

One of the first software life cycle models used was the waterfall. A main characteristic of the waterfall model is that each phase or component of the model is completed before the next stage begins. The process begins with a definition of the requirements for the system, and in the waterfall model the requirements are elicited, analyzed, and written into one or more requirements documents before any design work begins. The system design, program design, coding, and testing activities are all self-contained and thoroughly documented phases of the process. It should be noted that different names are commonly used for some of the phases; for example, the system design phase is often called "preliminary design," and the program design phase is often called "detailed design."

[Figure: Waterfall life cycle model.]

The waterfall model has a number of critics. One criticism questions the possibility of capturing all the requirements at the front-end of a project. Suppose you are asked to state all the requirements for a new car before it is designed and built. It would be impossible as a customer to conceive of all the detailed requirements that would be needed to design and build a car from scratch. Yet, this is the kind of demand that the waterfall model places upon customers and analysts at the front-end of a waterfall process.

Curtis, Krasner, Shen, and Iscoe (1987, cited in Pfleeger, 2000) state that the major shortcoming of the waterfall model is that it fails to treat software development as a problem-solving process. The waterfall was adapted from the world of hardware development, and it represents a manufacturing or assembly-line view of software development in which a component is developed and then replicated many times over. But software is a creation process rather than a manufacturing process: it evolves iteratively as the understanding of a problem grows, with many back-and-forth activities that involve trying various approaches to determine what works best. It is not possible, in other words, to accurately model the software development process as a set of self-contained phases as the waterfall attempts to do. Other models, such as the spiral, staged delivery, and evolutionary prototyping models, are better suited to the iterative nature of software development.

If you have worked as a test engineer in a waterfall environment, you may have direct experience with another problem that is all too common with the waterfall. Unless certain precautions are taken, all the errors made in defining the requirements, designing the system, and writing the code flow downhill to the test organization. If the waterfall is used, the test team may find a lot of bugs near the end of development—bugs that have to be sent back upstream to be fixed in the requirements, design, or coding of the product. Going back upstream in a waterfall process is difficult, expensive, and time-consuming because all the work products that came from the supposedly "completed" phases now must be revised.

In spite of its problems, the waterfall model is worth understanding because it contains the basic components that are necessary for software development. Regardless of the model used, software development should begin with an understanding of what needs to be built; in other words, an elicitation and analysis of the requirements. The development process should include design, coding, and testing activities, whether they are done in a linear sequence as in the waterfall, or in an iterative sequence as in the evolutionary prototyping or staged delivery models. We use the waterfall as a context for discussing process improvement, but the basic principles of rapid testing should be applicable to whatever life cycle model is used.

A Waterfall Test Process

In the traditional waterfall model, the role of the test organization is not made explicit until the system testing and acceptance testing phases. Most of the activity of the earlier phases, such as design, coding, and unit testing, is associated primarily with the software development team. For this reason it is useful to derive a corresponding life cycle model for the test process; the inputs and outputs of each of its activities are summarized in Table 1.1.

Table 1.1 Inputs and Outputs for the Waterfall Test Process

Requirements analysis
    Inputs:  Requirements definition, requirements specification
    Outputs: Requirements traceability matrix

Test planning
    Inputs:  Requirements specification, requirements trace matrix
    Outputs: Test plan (test strategy, test system, effort estimate and schedule)

Test design
    Inputs:  Requirements specification, requirements trace matrix, test plan
    Outputs: Test designs (test objectives, test input specification, test configurations)

Test implementation
    Inputs:  Software functional specification, requirements trace matrix, test plan, test designs
    Outputs: Test cases (test procedures and automated tests)

Test debugging
    Inputs:  "Early look" build of code, test cases, working test system
    Outputs: Final test cases

System testing
    Inputs:  System test plan, requirements trace matrix, "test-ready" code build, final test cases, working test system
    Outputs: Test results (bug reports, test status reports, test results summary report)

Acceptance testing
    Inputs:  Acceptance test plan, requirements trace matrix, beta code build, acceptance test cases, working test system
    Outputs: Test results

Operations and maintenance
    Inputs:  Repaired code, test cases to verify bugs, regression test cases, working test system
    Outputs: Verified bug fixes


Requirements Analysis

When analyzing software requirements, the goals of the test team and the development team are somewhat different. Both teams need a clear, unambiguous requirements specification as input to their jobs. The development team wants a complete set of requirements that can be used to generate a system functional specification, and that will allow them to design and code the software. The test team, on the other hand, needs a set of requirements that will allow them to write a test plan, develop test cases, and run their system and acceptance tests.

A very useful output of the requirements analysis phase for both the development and test teams is a requirements traceability matrix. A requirements traceability matrix is a document that maps each requirement to other work products in the development process, such as design components, software modules, test cases, and test results. It can be implemented in a spreadsheet, word processor table, database, or Web page.
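As a minimal sketch of the idea, the matrix below is modeled as a Python dictionary; every requirement ID, design reference, and test ID is hypothetical. The same gap-finding loop is what makes the matrix useful in the test planning phase described next.

    # A requirements traceability matrix as a simple mapping from each
    # requirement to the work products that trace to it.
    rtm = {
        "REQ-001": {"design": "SDS 3.1", "module": "login.py",
                    "tests": ["TC-101", "TC-102"]},
        "REQ-002": {"design": "SDS 3.2", "module": "billing.py",
                    "tests": ["TC-201"]},
        "REQ-003": {"design": "SDS 4.0", "module": "report.py",
                    "tests": []},   # no test traces to this requirement yet
    }

    # A traceability gap is any requirement with no associated test case;
    # each gap enlarges the scope of the testing still to be planned.
    for req_id, links in rtm.items():
        if not links["tests"]:
            print(f"{req_id}: no test coverage yet")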

Test Planning

By test planning we mean determining the scope, approach, resources, and schedule of the intended testing activities. Efficient testing requires a substantial investment in planning, and a willingness to revise the plan dynamically to account for changes in requirements, designs, or code as bugs are uncovered. It is important that all requirements be tested or, if the requirements have been prioritized, that the highest priority requirements are tested. The requirements traceability matrix is a useful tool in the test planning phase because it can be used to estimate the scope of testing needed to cover the essential requirements.

Ideally, test planning should take into account static as well as dynamic testing, but since the waterfall test process described in Table 1.1 is focused on dynamic testing, we'll exclude static testing for now. The activities of the test planning phase should prepare for the system test and acceptance test phases that come near the end of the waterfall, and should include:

  • Definition of what will be tested and the approach that will be used.

  • Mapping of tests to the requirements.

  • Definition of the entry and exit criteria for each phase of testing.

  • Assessment, by skill set and availability, of the people needed for the test effort.

  • Estimation of the time needed for the test effort.

  • Schedule of major milestones.

  • Definition of the test system (hardware and software) needed for testing.

  • Definition of the work products for each phase of testing.

  • An assessment of test-related risks and a plan for their mitigation.

Waterfall test process.

The work products or outputs that result from these activities can be combined in a test plan, which might consist of one or more documents.

Test Design, Implementation, and Debugging

Dynamic testing relies on running a defined set of operations on a software build and comparing the actual results to the expected results. If the expected results are obtained, the test counts as a pass; if anomalous behavior is observed, the test counts as a fail, but it may have succeeded in finding a bug. The defined set of operations that is run constitutes a test case, and test cases need to be designed, written, and debugged before they can be used.

A test design consists of two components: a test architecture and detailed test designs. The test architecture organizes the tests into groups such as functional tests, performance tests, security tests, and so on. It also describes the structure and naming conventions for a test repository. The detailed test designs describe the objective of each test, the equipment and data needed to conduct the test, and the expected result for each test, and they trace each test back to the requirement it validates. There should be at least a one-to-one relationship between requirements and test designs.
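A detailed test design is easy to picture as a small record. The sketch below models one in Python; the fields follow the description above, while the IDs and values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class TestDesign:
        test_id: str
        group: str           # architecture grouping: functional, performance, ...
        objective: str
        configuration: str   # equipment and data needed to conduct the test
        expected_result: str
        requirement_id: str  # trace back to the requirement being validated

    design = TestDesign(
        test_id="TC-101",
        group="functional",
        objective="Verify that login rejects an expired password",
        configuration="Test server with one account whose password has expired",
        expected_result="Login is denied with a 'password expired' message",
        requirement_id="REQ-001",
    )
    print(f"{design.test_id} traces to {design.requirement_id}")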

Detailed test procedures can be developed from the test designs. The level of detail needed for a written test procedure depends on the skill and knowledge of the people that run the tests. There is a tradeoff between the time that it takes to write a detailed, step-by-step procedure, and the time that it takes for a person to learn to properly run the test. Even if the test is to be automated, it usually pays to spend time up front writing a detailed test procedure so that the automation engineer has an unambiguous statement of the automation task.

Once a test procedure is written, it needs to be tested against a build of the product software. Since this test is likely to be run against "buggy" code, some care will be needed when analyzing test failures to determine if the problem lies with the code or with the test.

System Test

A set of finished, debugged tests can be used in the next phase of the waterfall test process, system test. The purpose of system testing is to ensure that the software does what the customer expects it to do. There are two main types of system tests: function tests and performance tests.

Functional testing requires no knowledge of the internal workings of the software, but it does require knowledge of the system's functional requirements. It consists of a set of tests that determines if the system does what it is supposed to do from the user's perspective.
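Because functional testing exercises only the external interface, a functional test can treat the product as a black box. In the sketch below the "product" is a trivial command run as a subprocess, standing in for a real system; the command and its expected output are assumptions made for illustration.

    import subprocess
    import sys

    # Drive the system only through its external interface: run it and
    # observe its output, with no knowledge of the code inside.
    result = subprocess.run(
        [sys.executable, "-c", "print(2 + 2)"],   # stand-in for the real product
        capture_output=True, text=True,
    )

    expected = "4"
    actual = result.stdout.strip()
    print("pass" if actual == expected else f"FAIL: got {actual!r}")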

Once the basic functionality of a system is ensured, testing can turn to how well the system performs its functions. Performance testing consists of such things as stress tests, volume tests, timing tests, and recovery tests. Reliability, availability, and maintenance testing may also be included in performance testing.
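Timing tests, one of the performance test types named above, reduce to measuring an operation against a budget. The operation and the budget in this Python sketch are hypothetical.

    import time

    def operation():
        sum(range(100_000))   # stand-in for the system function under test

    BUDGET_SECONDS = 0.5
    start = time.perf_counter()
    for _ in range(100):
        operation()
    elapsed = time.perf_counter() - start

    verdict = "pass" if elapsed < BUDGET_SECONDS else "FAIL"
    print(f"100 runs took {elapsed:.3f}s against a {BUDGET_SECONDS}s budget: {verdict}")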

In addition to function and performance tests, there are a variety of additional tests that may need to be performed during the system test phase; these include security tests, installability tests, compatibility tests, usability tests, and upgrade tests.

Acceptance Test

When system testing is completed, the product can be sent to users for acceptance testing. If the users are internal to the company, the testing is usually called alpha testing. If the users are customers who are willing to work with the product before it is finished, the testing is beta testing. Both alpha and beta tests are a form of pilot tests in which the system is installed on an experimental basis for the purpose of finding bugs.

Another form of acceptance test is a benchmark test in which the customer runs a predefined set of test cases that represent typical conditions under which the system is expected to perform when placed into service. The benchmark test may consist of test cases that are written and debugged by your test organization, but which the customer has reviewed and approved. When pilot and benchmark testing is complete, the customer should tell you which requirements are not satisfied or need to be changed in order to proceed to final testing.

The final type of acceptance test is the installation test, which involves installing a completed version of the product at user sites for the purpose of obtaining customer agreement that the product meets all requirements and is ready for delivery.

Maintenance

Maintenance of a product is often a challenging task for both the development team and the test team. Maintenance for the developer consists of fixing bugs that are found during customer operation and adding enhancements to product functionality to meet evolving customer requirements. For the test organization, maintenance means verifying bug fixes, testing enhanced functionality, and running regression tests on new releases of the product to ensure that previously working functionality has not been broken by the changes.

The basic principles of regression testing and bug verification apply well to these phases of the life cycle.
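One way to keep these principles concrete is to give every verified bug fix a test that joins the regression suite, so the next release cannot quietly reintroduce the defect. The sketch below follows that pattern; the function, bug ID, and tests are hypothetical.

    def normalize_name(name):
        # Fix for hypothetical bug BUG-412: leading and trailing spaces
        # were previously kept, which broke duplicate detection.
        return name.strip().lower()

    def test_bug_412_whitespace_is_stripped():
        assert normalize_name("  Ada Lovelace ") == "ada lovelace"

    def test_existing_behavior_still_works():
        assert normalize_name("Grace Hopper") == "grace hopper"

    # The regression suite is simply every such test, rerun on each new
    # build of the product.
    regression_suite = [test_bug_412_whitespace_is_stripped,
                        test_existing_behavior_still_works]
    for test in regression_suite:
        test()
    print(f"{len(regression_suite)} regression tests passed")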

Tying Testing and Development Together

The previous few sections have described waterfall models for the software development process and for the test process. The two models have common starting and ending points, but the test and development teams are involved in separate and different activities all along the way. This section presents two models that tie the two sets of activities together.

The first is the V model, sometimes called the hinged waterfall. The analysis and design activities form the left side of the V, coding is the lowest point of the V, and the testing activities form the right side. Maintenance activities are omitted for simplicity.

[Figure: Parallel waterfall model.]

The second model, the parallel waterfall, shows why software testing is such a demanding and difficult job. Even while working in the traditional waterfall environment, the test team must perform a range of activities, each of which depends on output from the development team. Good communication must be maintained between the development and test threads of the process, and each team needs to participate in some of the other team's verification activities. For example, the test team should participate in verifying the system design, and the development team should participate in verifying the test plan.


