I have a question about how to evaluate agreement between an individual's rating and the rating of a group of people of which that individual was a part. The group score was arrived at by consensus (i.e. the members agreed on a single score as a group). I was originally planning to use kappa to look at agreement between the two scores, but I am now questioning this approach. Any ideas on how I can evaluate either the difference or the agreement between the two scores? My main worry is independence: each individual's score fed into the group's consensus score, so the two ratings are not independent.
The data looks something like this:
ID   IndividualScore   GroupScore
 1          5              3
 2          4              2
etc.
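For concreteness, this is a minimal sketch of the kappa calculation I had originally planned, using Python and scikit-learn's cohen_kappa_score. The toy values and the assumption that the scores are ordinal on a 1-5 scale (hence the weighted kappa) are just illustrative, not my real data:

```python
from sklearn.metrics import cohen_kappa_score

# Toy data in the same shape as the table above (values are made up)
individual_scores = [5, 4, 3, 2, 4]
group_scores      = [3, 2, 3, 2, 5]

# Quadratic weights penalise larger disagreements more heavily,
# which seemed appropriate if the scores are ordinal.
kappa = cohen_kappa_score(individual_scores, group_scores, weights="quadratic")
print(f"Weighted kappa: {kappa:.3f}")
```

My concern is whether a statistic like this is even meaningful here, given that the individual helped produce the group score.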