I'm trying to determine whether different linear regressions are statistically different from one another. For that, I fitted a separate linear regression for each of my treatments (round, triangle and square in the figure) and extracted the slope (a) and its standard error. I then ran t-tests to see whether the treatments differ significantly, using this set of values for each treatment: (a-error, a, a+error).
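To make this concrete, here is roughly what I ran for one pair of treatments (a sketch; `df`, `y`, `x` and `treatment` are placeholders for my actual data frame and columns):

```r
# one regression per treatment
fit_round  <- lm(y ~ x, data = subset(df, treatment == "round"))
fit_square <- lm(y ~ x, data = subset(df, treatment == "square"))

# slope (a) and its standard error for each fit
a_round   <- coef(summary(fit_round))["x", "Estimate"]
se_round  <- coef(summary(fit_round))["x", "Std. Error"]
a_square  <- coef(summary(fit_square))["x", "Estimate"]
se_square <- coef(summary(fit_square))["x", "Std. Error"]

# t-test on the three-value sets (a - error, a, a + error) for the two treatments
t.test(c(a_round - se_round, a_round, a_round + se_round),
       c(a_square - se_square, a_square, a_square + se_square))
```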
My problem is that, even though each linear regression is significant (p-value < 0.05), I can't technically use it: linear regression is a parametric method and its assumptions on the residuals are not met. For each of my treatments I ran the Durbin-Watson test, the Goldfeld-Quandt test (for linear regressions) and the Shapiro-Wilk test, and I get p-values < 0.05 for some of them, sometimes for all of them (depending on the treatment). I've tried all the transformations I could think of (log10, square root, double square root, square, inverse), but the problem remains. There don't seem to be any outliers in my dataset either.
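These are the assumption checks I ran on each per-treatment fit (same placeholder names as above):

```r
library(lmtest)               # for dwtest() and gqtest()

fit <- lm(y ~ x, data = subset(df, treatment == "round"))
dwtest(fit)                   # Durbin-Watson: independence of the residuals
gqtest(fit)                   # Goldfeld-Quandt: constant variance of the residuals
shapiro.test(residuals(fit))  # Shapiro-Wilk: normality of the residuals

# example of one of the transformations I tried before re-running the checks
fit_log <- lm(log10(y) ~ x, data = subset(df, treatment == "round"))
```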
So here are my specific questions:
Is there another way I can make this work with linear regression? Can I still use that function (my dataset is small, which might explain why the assumptions on the residuals are not met)?
What would be a good non-parametric alternative? I've been exploring Spearman correlation, but even with the "conf.level = 0.95" argument I don't get an error estimate, so I can't perform the t-tests. I'm glad to know there is a strong correlation between my quantitative variables, but that's something I already know. Is there a way I can compare Spearman correlations?
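For reference, this is the Spearman call I'm using (`x` and `y` stand in for my two quantitative variables):

```r
cor.test(x, y, method = "spearman", conf.level = 0.95)
# the output reports rho and a p-value, but no confidence interval or standard error,
# so I have nothing to plug into the t-tests between treatments
```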
Is there a third option? I'm currently trying Poisson regression, since my data are not continuous (there are clusters along the regressions, see figure), but I get a lot of warnings when I apply it to my data and I'm not sure why.
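This is the Poisson attempt that produces the warnings (same placeholder names as above):

```r
# Poisson regression for one treatment; the warnings appear when this is fitted
fit_pois <- glm(y ~ x, family = poisson, data = subset(df, treatment == "round"))
summary(fit_pois)
```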
I'm a beginner with both R and statistics, so thank you for your understanding and for your help :)



