I need your advice regarding a complicated design. I am testing data on new eye drops that are supposed to decrease some quantitative measure. For every subject, one eye is randomly assigned to receive the drops, while the other serves as a control. Before the drops are applied, this measure (Y from now on) is measured on both eyes. From this I calculated the between-eye difference, which had a mean near 0 with an SD of 1.5, and the points looked evenly scattered around the mean.
Then the drops were applied, the measure was taken again on both eyes, and the difference was calculated. This time it was far from 0, so the treatment works. After some days another measurement was taken, showing that the effect is decreasing, and then there was one more measurement visit.
I want to find the time at which the treatment is no longer working. How should I do this? For every subject I have about 4 visits, including the baseline, and the between-eye difference at each visit. I can't simply say that anything below 1.5 SD still counts, or apply some other arbitrary cutoff; I need a proper test. Which test / model should I use here? Thank you!
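For concreteness, here is a minimal sketch in Python of the data layout I have in mind (hypothetical simulated numbers, not my real data): one between-eye difference per subject per visit, with a naive per-visit one-sample t-test against 0 shown only to illustrate the structure of the question, not as my proposed analysis.

```python
# Hypothetical sketch of the data layout: one between-eye difference
# (treated - control) per subject per visit. The numbers are simulated,
# not my real data; the per-visit t-test is only an illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 20
visits = ["baseline", "visit1", "visit2", "visit3"]

# Simulated differences: ~0 at baseline, a large effect at visit 1 that
# fades at later visits (SD ~ 1.5, as in my real baseline data).
true_effect = {"baseline": 0.0, "visit1": 3.0, "visit2": 1.5, "visit3": 0.5}
diffs = {v: true_effect[v] + rng.normal(0.0, 1.5, n_subjects) for v in visits}

for v in visits:
    t, p = stats.ttest_1samp(diffs[v], popmean=0.0)
    print(f"{v}: mean diff = {diffs[v].mean():.2f}, t = {t:.2f}, p = {p:.3f}")
```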
"Is/not working" sounds dichotomous to me. – Nick Stauner Mar 13 '14 at 22:50