Topic: The Logic of Computer Adaptive Testing

Author: tanushree (Senior Member, Joined: 04Apr2007, Posts: 2160)
Posted: 22Oct2007 at 4:04am
The Logic of Computer Adaptive Testing

    Computer adaptive testing can begin when an item bank exists with IRT item statistics available on all items, when a procedure has been selected for obtaining ability estimates based upon candidate item performance, and when an algorithm has been chosen for sequencing the test items administered to candidates.

    The CAT algorithm is usually an iterative process with the following steps:

       1. All the items that have not yet been administered are evaluated to determine which will be the best one to administer next, given the currently estimated ability level.
       2. The "best" next item is administered and the examinee responds.
       3. A new ability estimate is computed based on the responses to all of the administered items.
       4. Steps 1 through 3 are repeated until a stopping criterion is met.
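
    For concreteness, the loop might look like the following Python sketch. This is illustrative only: the item-bank format, the respond callback, and the helper functions information, estimate_theta, and standard_error are assumptions of this sketch (the helpers are fleshed out after the corresponding steps below), not part of the tutorial itself.

    def run_cat(item_bank, respond, theta=0.0, max_items=30, se_target=0.30):
        # Administer items one at a time, re-estimating ability after each response.
        administered, responses = [], []
        remaining = list(item_bank)
        while remaining and len(administered) < max_items:
            # Step 1: evaluate every not-yet-administered item and pick the one
            # providing the most information at the current ability estimate.
            best = max(remaining, key=lambda item: information(item, theta))
            remaining.remove(best)
            # Step 2: administer the item and record the scored (0/1) response.
            administered.append(best)
            responses.append(respond(best))
            # Step 3: compute a new ability estimate from all responses so far.
            theta = estimate_theta(administered, responses, theta)
            # Step 4: repeat until a stopping criterion is met; here, a target
            # standard error (precision). Time or item-count limits also work.
            if standard_error(administered, theta) <= se_target:
                break
        return theta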

    Several different methods can be used to compute the statistics needed in each of the first three steps. Hambleton, Swaminathan, and Rogers (1991); Lord (1980); Wainer, Dorans, Flaugher, Green, Mislevy, Steinberg, and Thissen (1990); and others have shown how this can be accomplished using Item Response Theory.

    Treating item parameters as givens, the ability estimate is the value of theta that best fits the model. When the examinee is given a sufficient number of items, the initial estimate of ability should not have a major effect on the final estimate of ability. The tailoring process will quickly result in the administration of reasonably targeted items. The stopping criterion could be time, number of items administered, change in ability estimate, content coverage, a precision indicator such as the standard error, or a combination of factors.

    Step 1 refers to selecting the "best" next item. Little information about an examinee's ability level is gained when the examinee responds to an item that is much too easy or much too hard. Rather, one wants to administer an item whose difficulty is closely targeted to the examinee's ability. Furthermore, one wants to give an item that does a good job of discriminating between examinees whose ability levels are close to the target level.

    Using item response theory, we can quantify the amount of information an item provides at a given ability level. Under the maximum information approach to CAT, the approach used in this tutorial, the "best" next item is the one that provides the most information at the current ability estimate (in practice, additional constraints are incorporated in the selection process). With IRT, item information can be quantified as the standardized slope of P_i(\theta) at \theta. In other words,

        I_i(\theta) = \frac{[P_i'(\theta)]^2}{P_i(\theta)\,[1 - P_i(\theta)]}

    where P_i(\theta) is the probability of a correct response to item i, P_i'(\theta) is the first derivative of P_i(\theta) with respect to \theta, and I_i(\theta) is the information function for item i.

    Thus, for Step 1, I_i(\theta) can be evaluated for each item at the current ability estimate. While maximizing information is perhaps the best-known approach to selecting items, Kingsbury and Zara (1989) describe several alternative item selection procedures.
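
    As a concrete illustration, the sketch below evaluates I_i(\theta) for items following the three-parameter logistic (3PL) model. The dictionary keys a, b, and c (discrimination, difficulty, and guessing) are assumptions of this sketch, and D = 1.7 is the usual scaling constant.

    import math

    D = 1.7  # scaling constant commonly used with logistic IRT models

    def prob_correct(item, theta):
        # 3PL model: P(theta) = c + (1 - c) / (1 + exp(-D a (theta - b))).
        a, b, c = item["a"], item["b"], item["c"]
        return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

    def information(item, theta):
        # I(theta) = P'(theta)^2 / (P(theta) (1 - P(theta))), using the 3PL
        # derivative P'(theta) = D a (P - c) (1 - P) / (1 - c).
        a, c = item["a"], item["c"]
        p = prob_correct(item, theta)
        p_prime = D * a * (p - c) * (1.0 - p) / (1.0 - c)
        return p_prime ** 2 / (p * (1.0 - p))

    For example, information({"a": 1.2, "b": 0.0, "c": 0.2}, 0.0) gives the information this hypothetical item provides for an examinee at the center of the ability scale.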

    In Step 3, a new ability estimate is computed. The approach used in this tutorial is a modification of the Newton-Raphson iterative method for solving equations, outlined in Lord (1980, p. 181). The estimation starts with an initial estimate \hat{\theta}_s, computes the probability of a correct response to each item using \hat{\theta}_s, and then adjusts the ability estimate to obtain improved agreement between those probabilities and the observed response vector. The process is repeated until the adjustment is extremely small. Thus:

        \hat{\theta}_{s+1} = \hat{\theta}_s + \frac{\sum_{i=1}^{n} \frac{[u_i - P_i(\hat{\theta}_s)]\, P_i'(\hat{\theta}_s)}{P_i(\hat{\theta}_s)\,[1 - P_i(\hat{\theta}_s)]}}{\sum_{i=1}^{n} I_i(\hat{\theta}_s)}

    where u_i is the scored response to item i (1 if correct, 0 if incorrect) and n is the number of items administered so far.

    The second term on the right-hand side of the above equation is the adjustment, and \hat{\theta}_{s+1} denotes the adjusted ability estimate. The denominator of the adjustment is the sum of the item information functions evaluated at \hat{\theta}_s. When \hat{\theta}_s is the maximum likelihood estimate of the examinee's ability, the sum of the item information functions is the test information function, I(\hat{\theta}_s).
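
    A minimal sketch of this scoring iteration, reusing prob_correct and the constant D from the sketch above (again an assumption of these sketches, not the tutorial's own code). Note that an all-correct or all-incorrect response vector has no finite maximum likelihood estimate, so a production implementation would bound the estimate.

    def estimate_theta(items, responses, theta=0.0, tol=1e-4, max_iter=50):
        # Repeat the Newton-Raphson adjustment until it is extremely small.
        for _ in range(max_iter):
            numerator = 0.0    # sum of (u_i - P_i) P_i' / (P_i (1 - P_i))
            denominator = 0.0  # sum of item information, I_i(theta)
            for item, u in zip(items, responses):
                a, c = item["a"], item["c"]
                p = prob_correct(item, theta)
                p_prime = D * a * (p - c) * (1.0 - p) / (1.0 - c)
                numerator += (u - p) * p_prime / (p * (1.0 - p))
                denominator += p_prime ** 2 / (p * (1.0 - p))
            adjustment = numerator / denominator
            theta += adjustment
            if abs(adjustment) < tol:
                break
        return theta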

    The standard error associated with the ability estimate is calculated by first determining the amount of information the set of items administered to the candidate provides at the candidate's ability level; this is easily obtained by summing the values of the item information functions at that ability level to obtain the test information. Second, the test information is inserted in the formula below to obtain the standard error:

        SE(\hat{\theta}) = \frac{1}{\sqrt{I(\hat{\theta})}}

    Thus, the standard error for individuals can be obtained as a by-product of computing an estimate of an examinee's ability.
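
    Continuing the same hypothetical sketch, the calculation is one line once the item information function is available:

    def standard_error(items, theta):
        # SE(theta) = 1 / sqrt(test information), where the test information
        # is the sum of the item information functions evaluated at theta.
        test_information = sum(information(item, theta) for item in items)
        return 1.0 / math.sqrt(test_information)

    Because run_cat above already evaluates item information at every step, reporting the standard error alongside the final ability estimate costs essentially nothing extra.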




