
I need your advice regarding a complicated design. I am analyzing data on new eye drops that are supposed to decrease some quantitative measure. For every subject, one eye is randomly assigned to receive the drops while the other serves as a control, and before the drops are applied, this measure (Y from now on) is measured on both eyes. From these I calculated the between-eye difference, which had a mean near 0 with an SD of 1.5, and the points looked evenly scattered around the mean.

The drops were then applied, the measure was taken again on both eyes, and the difference was calculated. This time it was far from 0; the treatment works. After some days another measurement was taken, showing that the effect is decreasing, and then there was one more measurement visit.

I want to find the time at which the treatment stops working. How should I do it? For every subject I have about 4 visits, including the baseline, and the between-eye difference at each visit. I can't simply say that anything below 1.5 SD still counts, or pick any other arbitrary cutoff; I need some kind of test. Which test / model should I use here? Thank you!
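For concreteness, the data are laid out roughly like the sketch below, with one row per subject and visit (the column names and numbers are made up for illustration, not the real measurements):

```python
# Made-up illustration of the layout: one between-eye difference per subject
# per visit; visit 0 is the pre-treatment baseline.
import pandas as pd

df = pd.DataFrame({
    "subject":  [1, 1, 1, 1, 2, 2, 2, 2],
    "visit":    [0, 1, 2, 3, 0, 1, 2, 3],
    "eye_diff": [0.2, 4.1, 2.3, 0.9, -0.3, 3.8, 1.7, 0.4],
})

# Mean and SD of the between-eye difference at each visit.
print(df.groupby("visit")["eye_diff"].agg(["mean", "std"]))
```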

  • Logically, it isn't really possible to say that the difference is 0 at t weeks (to understand this, it may help to read my answer here: Why do statisticians say you can't accept the null). Instead, you need to determine what difference is sufficiently small that it might as well be 0. In addition, you will need to use a mixed effects model for your data. – gung - Reinstate Monica Mar 13 '14 at 22:48 (a sketch along these lines appears after the comments)
  • I'm not aware of any null hypothesis test that wouldn't apply some cutoff to the mean difference (e.g., probability of obtaining a sample like yours if the mean difference in the population is zero, 1.5 SD, or whatever). Even if you don't test a hypothesis and just model and compare the trends, I'm not sure how you'll decide when the treatment isn't working anymore without applying a cutoff of some sort, because is/not working sounds dichotomous to me. – Nick Stauner Mar 13 '14 at 22:50
  • Maybe you'd prefer to model change over consecutive days in the frequency of positive differences in the participants' treated eyes? This would just apply the cutoff within participants...presumably when the frequency of positive differences is (or when your trend model would predict it to be) no greater than the frequency of zero or negative differences, you could be certain it's not working anymore, and up until that natural cutoff point, you'd have some sense of the decreasing evidence that it's working. – Nick Stauner Mar 13 '14 at 22:50
  • I wasn't necessarily looking for a statistical test. I did try comparing the difference means at every time point (follow-up visit) using ANOVA with the Dunnett test comparing each time point to the baseline. I am not sure it's the right way to go. – user41883 Mar 14 '14 at 06:32
  • The point is, at time 0 the mean is near 0 (-0.15) with an SD of 1.5, and the largest difference is -2 or even a bit more. This means that a difference of -2 (or 2) can occur just by chance. The big question is how to choose the cutoff wisely. If a difference of -2 or 2 can occur by chance, should I be looking for at least 3? Or 4? (I mean per subject: if at time point t I see D=4, it means the treatment is still working for that subject, and if D<4 it is not, for example.) The clinicians do not know, and the data analyst should decide, but how? – user41883 Mar 14 '14 at 06:34
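Following gung's comment, here is a minimal sketch of what a mixed-effects model plus an equivalence-style check could look like, assuming the data are in long format with the hypothetical columns subject, visit, and eye_diff used above. The file name and the margin value are placeholders, and the margin itself has to come from clinical judgment, not from the model:

```python
# Rough sketch of the mixed-effects / equivalence idea from the comments.
# Column names, the file name, and the equivalence margin are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("eye_drops_long.csv")   # hypothetical: one row per subject x visit

# Random intercept per subject; visit is categorical, so each follow-up visit
# gets its own shift relative to the baseline visit (coded 0).
fit = smf.mixedlm("eye_diff ~ C(visit)", data=df, groups="subject").fit()
print(fit.summary())

# Because the baseline between-eye difference is near 0, each C(visit)[T.k]
# coefficient is approximately the treatment effect at visit k. Treat the
# effect as "practically zero" once its 95% CI lies inside (-margin, +margin).
margin = 1.0                             # placeholder clinical margin
est, se = fit.params, fit.bse
for name in est.index:
    if name.startswith("C(visit)"):
        lo, hi = est[name] - 1.96 * se[name], est[name] + 1.96 * se[name]
        print(f"{name}: estimate={est[name]:.2f}, 95% CI=({lo:.2f}, {hi:.2f}), "
              f"practically zero: {(lo > -margin) and (hi < margin)}")
```

Treating visit as categorical avoids assuming any particular decay shape; if a smooth decline over days is plausible, time could instead enter as a continuous term and the fitted curve could be used to estimate when the effect crosses the margin.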

0 Answers