
I have a question regarding how to evaluate agreement between an individual's rating and that of a group of people (of which the individual was a part). The group score was reached by consensus (i.e. the group agreed on a single score). I was originally planning to use kappa to look at agreement between the two scores, but I am now questioning this approach. Any ideas on how I can evaluate either the difference or the agreement between the two scores? My main worry is independence.

The data looks something like this:
    ID   IndividualScore   GroupScore
     1          5               3
     2          4               2
    etc.


1 Answer


If the individual was a part of the group, then the scores are not independent. You should take the individual's scores out of the group. Then, for two independent raters:

If the score variable is:

1) dichotomous - use Cohen's kappa (not your case, indeed) and the McNemar test for marginal homogeneity

2) nominal categorical - use unweighted Cohen's kappa, plus the Stuart-Maxwell or Bhapkar test for marginal homogeneity

3) ordered categorical - use polychoric correlation, plus the Stuart-Maxwell or Bhapkar test for marginal homogeneity

4) continuous, Likert, or at least interval-scaled - use Pearson correlation, plus the Stuart-Maxwell or Bhapkar test for marginal homogeneity (see the sketch after this list for how several of these can be computed)
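A minimal sketch of how some of these could be computed in Python, assuming sklearn, scipy, and statsmodels are available; the ratings below are made-up illustration data, not from the question:

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.contingency_tables import SquareTable

    # Hypothetical paired ratings on a 3-point scale (illustration only)
    individual = np.array([3, 2, 3, 1, 2, 3, 1, 2, 2, 3])
    group      = np.array([3, 2, 2, 1, 1, 3, 1, 3, 2, 2])

    # Unweighted Cohen's kappa (nominal case); weights="linear" or
    # weights="quadratic" would give a weighted kappa for ordered categories
    print("kappa:", cohen_kappa_score(individual, group))

    # Build the square contingency table (individual x group)
    table = np.zeros((3, 3), dtype=int)
    for i, g in zip(individual, group):
        table[i - 1, g - 1] += 1

    # Stuart-Maxwell and Bhapkar tests for marginal homogeneity
    st = SquareTable(table)
    print(st.homogeneity(method="stuart_maxwell"))
    print(st.homogeneity(method="bhapkar"))

    # Pearson correlation (appropriate only if scores are at least interval)
    r, p = pearsonr(individual, group)
    print("Pearson r = %.3f, p = %.3f" % (r, p))

(Polychoric correlation is not in scipy or statsmodels and would need a dedicated package, so it is omitted here.)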

For more reference, explore this site: http://www.john-uebersax.com/stat/agree.htm

  • Thanks for the reply! The problem is that I can't take an individual score out, as the group agreed on one score as a group. I basically want to know whether the individual agreed with the score that the group agreed on by consensus. – JonasE Feb 05 '14 at 12:52
  • Well, then you go with it as it is. And you can test for bias by using a t-test for dependent samples (if the scores are at least interval measured). – Germaniawerks Feb 05 '14 at 13:14
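For reference, a quick hypothetical sketch of that dependent-samples t-test in Python (made-up scores, scipy assumed):

    import numpy as np
    from scipy.stats import ttest_rel

    # Hypothetical individual vs. group-consensus scores (illustration only)
    individual = np.array([5, 4, 3, 5, 2, 4, 3, 5])
    group      = np.array([3, 2, 3, 4, 2, 3, 2, 4])

    # Paired (dependent-samples) t-test on the score differences;
    # a significant result suggests systematic bias between the
    # individual's ratings and the group consensus
    t, p = ttest_rel(individual, group)
    print("t = %.3f, p = %.3f" % (t, p))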