I'm a little bit confused about this, so any help would be appreciated!
Let's say I have a repeated-measures design in which participants rate the attractiveness of 50 different faces under 4 facial-expression conditions. Condition A is a baseline with a neutral expression, condition B is happy, condition C is sad, and condition D is angry.
I've run three separate linear regressions, each with the condition B, C, or D ratings as the dependent variable and the condition A ratings as the predictor, generating 3 different regression equations. I would like to find a way to statistically compare these equations in terms of their slopes and intercepts.
Specifically, the theory I am trying to test predicts there will be no change in slope between them, but condition B should raise the intercept, condition D should lower it, and there should be no change for condition C. In other words, the different conditions should produce a uniform shift up or down in attractiveness ratings.
Is there any way to test this statistically?
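For what it's worth, here is a minimal sketch of one way such a test could be set up: refit the three separate regressions as a single dummy-coded model, so that each dummy coefficient tests an intercept shift and each dummy-by-predictor interaction tests a slope change. The data below are simulated purely for illustration (a mean-centered baseline, an intercept shift of ±0.8 for B and D, identical slopes), and the variable names are my own.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 50                                        # 50 faces
A = rng.normal(0.0, 1.0, n)                   # baseline (neutral) ratings, mean-centered
B = 0.8 + A + rng.normal(0.0, 0.3, n)         # happy: intercept shifted up, same slope
C = 0.0 + A + rng.normal(0.0, 0.3, n)         # sad:   no shift
D = -0.8 + A + rng.normal(0.0, 0.3, n)        # angry: intercept shifted down, same slope

# Stack into long format, with condition B as the reference level.
y = np.concatenate([B, C, D])
a = np.tile(A, 3)
dC = np.repeat([0.0, 1.0, 0.0], n)            # d1: C vs. B
dD = np.repeat([0.0, 0.0, 1.0], n)            # d2: D vs. B
X = np.column_stack([np.ones(3 * n), a, dC, dD, a * dC, a * dD])
names = ["intercept", "slope(A)", "d1 (C-B)", "d2 (D-B)", "A x C", "A x D"]

# Ordinary least squares plus the usual t-tests on each coefficient.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = y.size - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
pvals = 2 * stats.t.sf(np.abs(beta / se), dof)

# pvals[2], pvals[3] are separate p-values for the intercept shifts;
# pvals[4], pvals[5] are separate p-values for the slope changes.
for name, b, p in zip(names, beta, pvals):
    print(f"{name:>9}: coef = {b:+.3f}  p = {p:.3g}")
```

With data simulated this way, the dummy terms come out significant while the interaction terms do not, which is exactly the "intercept shift, no slope change" pattern the theory predicts.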
I understand that the significance of the d1 and d2 coefficients would indicate a difference between the models, but is there a way to unpack this difference? What I mean is that I would like to report whether there is a significant difference between the models in slope and in intercept separately, as the theory I want to test predicts a change in intercept but not in slope. Is there a way to calculate a p-value for the change in slope, and a separate one for the change in intercept?
– RarelySee Aug 27 '16 at 16:39