Topic: Inter-Rater or Inter-Observer Reliability |
tanushree (Senior Member, Joined: 04Apr2007, Posts: 2160)
Posted: 08Oct2007 at 12:11am
Inter-Rater or Inter-Observer Reliability
Whenever you use humans as a part of your measurement procedure, you have to worry about whether the results you get are reliable or consistent. People are notorious for their inconsistency. We are easily distractible. We get tired of doing repetitive tasks. We daydream. We misinterpret. So how do we determine whether two observers are being consistent in their observations? You probably should establish inter-rater reliability outside of the context of the measurement in your study. After all, if you use data from your study to establish reliability, and you find that reliability is low, you're kind of stuck. Probably it's best to do this as a side study or pilot study. And, if your study goes on for a long time, you may want to reestablish inter-rater reliability from time to time to assure that your raters aren't changing.
There are two major ways to actually estimate inter-rater reliability. If your measurement consists of categories -- the raters are checking off which category each observation falls in -- you can calculate the percent of agreement between the raters. For instance, let's say you had 100 observations that were being rated by two raters. For each observation, the rater could check one of three categories. Imagine that on 86 of the 100 observations the raters checked the same category. In this case, the percent of agreement would be 86%. OK, it's a crude measure, but it does give an idea of how much agreement exists, and it works no matter how many categories are used for each observation (see the sketch below).
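Here is a minimal sketch of that percent-agreement calculation in Python. The rating lists are hypothetical (not from the example above, beyond the 86-of-100 agreement count): two raters assigning each of 100 observations to one of three categories.

def percent_agreement(ratings_a, ratings_b):
    """Return the percentage of observations on which both raters agree."""
    assert len(ratings_a) == len(ratings_b)
    matches = sum(1 for a, b in zip(ratings_a, ratings_b) if a == b)
    return 100.0 * matches / len(ratings_a)

# Hypothetical data: the raters check the same category on 86 of 100
# observations, so this prints 86.0.
rater_1 = [1] * 40 + [2] * 30 + [3] * 16 + [1] * 14
rater_2 = [1] * 40 + [2] * 30 + [3] * 16 + [2] * 14
print(percent_agreement(rater_1, rater_2))  # 86.0

Note that percent agreement works the same way regardless of how many categories the raters can choose from.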
The other major way to estimate inter-rater reliability is appropriate when the measure is a continuous one. There, all you need to do is calculate the correlation between the ratings of the two observers. For instance, they might be rating the overall level of activity in a classroom on a 1-to-7 scale. You could have them give their rating at regular time intervals (e.g., every 30 seconds). The correlation between these ratings would give you an estimate of the reliability or consistency between the raters.
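Below is a minimal sketch of that approach, assuming two observers rating classroom activity on a 1-to-7 scale once every 30 seconds. The rating lists are made up for illustration, and the Pearson correlation is computed by hand so no extra libraries are needed; a high correlation suggests the two observers are rating consistently.

import math

def pearson_correlation(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical 1-to-7 activity ratings, one pair per 30-second interval.
observer_1 = [3, 4, 5, 6, 4, 2, 5, 7, 6, 3]
observer_2 = [3, 5, 5, 6, 3, 2, 4, 7, 6, 4]
print(round(pearson_correlation(observer_1, observer_2), 2))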