
Once more I rushed into running a promising experiment without spending enough thought on how I would analyze the data. Now that I am finished, I have two conditions for which I'd like to compare accuracy scores, and as it turns out, each condition has a different chance level (0.33 vs. 0.66). Naturally, scores in the condition with a baseline of 0.66 are higher; however, I don't think a direct comparison is trivial.

Does anyone have an idea of a good correction for these baseline differences?

Edit:

More information on my design:

I have two conditions (free vs. forced choice). Every subject did both. In total, 80% of all trials belonged to free choice and 20% to forced choice. The correct response for free choice could always be guessed with a probability of 66%, and the one for forced choice with a likelihood of 33%. As dependent variables I have reaction times and accuracies. In order to see whether the conditions are equally difficult (so I can compare reaction times), I need to compare accuracies.
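
To make the baseline problem concrete, here is a minimal simulation sketch of what a subject who guesses on every trial would score in each condition. The seed and trial count are made up for illustration; only the two chance levels come from the design above.

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical seed, for reproducibility only
n_trials = 10_000               # hypothetical trial count, for illustration

# A pure guesser is correct with probability equal to the
# chance level of the condition.
guess_free = rng.random(n_trials) < 0.66    # free choice: guessable at 66%
guess_forced = rng.random(n_trials) < 0.33  # forced choice: guessable at 33%

print(f"free-choice accuracy under pure guessing:   {guess_free.mean():.2f}")   # ~0.66
print(f"forced-choice accuracy under pure guessing: {guess_forced.mean():.2f}")  # ~0.33
```

The same raw accuracy therefore means very different things in the two conditions: 0.70 would be barely above chance in free choice but more than double chance in forced choice.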

Thanks.

userE
  • How about a propensity score adjustment? – StatsStudent Jul 28 '15 at 15:18
  • Not sure whether this is really applicable to my problem here. I've never heard of it before, so forgive me if I misunderstand something, but for which covariates would I calculate the propensity score? I already know with which likelihood a trial is assigned to a certain condition, and therefore also the respective baseline level. Since the likelihoods are genuinely different, I expect the propensity scores to also be very different between conditions. How would the adjustment you mentioned work? – userE Jul 29 '15 at 07:55
  • So it sounds like you already know the propensity score -- you know that one group has a 33.3% chance of receiving a condition, while the other has a 66.6% chance of being in that condition. I wasn't sure whether the two groups were somehow determined by some set of covariates (this is usually what happens). At any rate, you may still want to employ a propensity score approach -- it would allow you to either weight or adjust your data with the propensity score to offset the selection bias and correct for baseline differences -- this is a fairly standard approach in causal inference. – StatsStudent Jul 29 '15 at 20:10
  • Here are some good introductory papers for you that you will likely find helpful and directly applicable to your problem:
    1. http://www.tandfonline.com/doi/abs/10.1080/00273171.2011.568786#.Vbk0Ifn_XJM
    2. http://jea.sagepub.com/content/34/1/66.refs
    3. http://www.ncbi.nlm.nih.gov/pubmed/25773902
    – StatsStudent Jul 29 '15 at 20:16
  • Thanks for the papers, @StatsStudent. They helped a lot in getting things sorted. However, I'm still not sure how to tackle my problem with them. Mostly, I have trouble drawing the analogies between subjects, treatments, and outcomes in the typical clinical setting and my subjects, conditions, and outcome. As far as I understand, my two conditions are comparable to a treatment, just as my accuracy score is the outcome. However, what is usually referred to as a subject has to be something else, because each of mine did both conditions. I edited my question to add information on my study design. – userE Jul 30 '15 at 11:16
  • Or is it as simple as weighting the accuracies per condition with the inverse of the respective baseline? – userE Jul 30 '15 at 12:55
  • As I tried a few configurations which made sense conceptually but did not change the scores in a sensible way, I've come to the conclusion that propensity adjustment is (most likely) not applicable to this case. Another issue is that the resulting scores are hard to interpret, because they are inverse accuracies, for which no real reference exists (values can grow to infinity). @StatsStudent, I appreciate the suggestion, though. – userE Aug 03 '15 at 11:44
  • I don't agree with your assessment, but then again, I don't know all the details of your project. The resulting propensity scores are straightforward in terms of interpretation. I'm not sure what you mean by the scores being "inverse accuracies." After adjusting, weighting, or stratifying on the estimated propensity scores, you would simply carry out your analyses in the manner you are used to and report the results as you normally would. Interpretation should not change -- the only thing you'd be doing with a propensity score analysis is reducing selection bias, which you clearly have. – StatsStudent Aug 04 '15 at 03:30
  • Sorry, "inverse accuracies" was pretty unclear. What I mean is inverse baselines. But still, the smaller the baseline, the higher the factor by which I correct my outcome variable, with no upper limit. As I mentioned in the edit, I want to check which of the conditions produced more errors, to see whether the effect in RT goes in the same direction or whether a speed-accuracy trade-off is at hand. Unfortunately, the interpretation isn't that straightforward to me, since the corrected outcome variables are even further apart from each other (acc: 0.7 vs. 0.9; corr. acc: 2.1 vs. 1.35; see the sketch after these comments). – userE Aug 04 '15 at 07:50
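
For reference, the inverse-baseline weighting discussed in the comments divides each condition's accuracy by its chance level. Here is a minimal sketch reproducing the numbers quoted above; the pairing of accuracies to conditions is an inference from the corrected values (0.70 with the 33% baseline, 0.90 with the 66% baseline), not stated explicitly in the thread.

```python
# Observed accuracies and chance levels quoted in the comments.
acc = {"free": 0.90, "forced": 0.70}
chance = {"free": 0.66, "forced": 0.33}

for cond in acc:
    # The ratio is unbounded: it grows without limit as the chance level shrinks.
    ratio = acc[cond] / chance[cond]
    print(f"{cond:6s}: acc = {acc[cond]:.2f}, acc / chance = {ratio:.2f}")
# free  : acc = 0.90, acc / chance = 1.36
# forced: acc = 0.70, acc / chance = 2.12
```

Because the ratio has no upper bound, the weighted scores lack a fixed reference point, which is exactly the interpretability problem raised in the comments.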

0 Answers