I want to compare linear regression slopes between two datasets, and at first I used a two-sample t-test. However, when I force the intercept through $0$ for both regressions (necessary for this work), the standard error of the slope coefficient shrinks by over an order of magnitude, and the t-tests become almost trivial: because the SEs are so small, the t statistics become so large that everything, even the difference between two slopes like $\beta = 0.0202$ and $\beta = 0.0203$, is significant.
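For reference, the textbook variance formulas for the slope in the two models are, if I have them right,
$$
\widehat{\operatorname{Var}}(\hat\beta_{\text{intercept}}) = \frac{\hat\sigma^2}{\sum_i (x_i - \bar{x})^2},
\qquad
\widehat{\operatorname{Var}}(\hat\beta_{\text{origin}}) = \frac{\hat\sigma^2}{\sum_i x_i^2},
$$
and since $\sum_i x_i^2 = \sum_i (x_i - \bar{x})^2 + n\bar{x}^2$, the second denominator is much larger whenever $\bar{x}$ is far from $0$.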
I was wondering whether anyone knows of other statistical tests for comparing regression slopes that might work better here? Also, can anyone explain why forcing the intercept changes the standard error of the slope coefficient so much?
For example, I ran linear regressions on two samples with different conditions and got two slopes ($\beta = 0.0202 \pm 0.0003$ and $\beta = 0.0203 \pm 0.0006$), but the difference in slopes has $t = 15.28$ with $df = 1939$ and $p \approx 0$. This only becomes an issue when the intercept is forced through the origin.
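For concreteness, here is a minimal sketch of how I am computing the test, assuming the usual statistic $t = (\hat\beta_1 - \hat\beta_2)/\sqrt{SE_1^2 + SE_2^2}$ with $df = n_1 + n_2 - 2$ (the slopes, SEs, and sample sizes below are hypothetical placeholders, not my actual data):

```python
import numpy as np
from scipy import stats

# Hypothetical fitted values from two through-origin regressions:
# slope estimate, its standard error, and the sample size for each dataset.
b1, se1, n1 = 0.85, 0.010, 1000
b2, se2, n2 = 0.88, 0.012, 941

# Difference-of-slopes t statistic.
t_stat = (b1 - b2) / np.sqrt(se1**2 + se2**2)

# One parameter is estimated per through-origin fit, so df = n1 + n2 - 2.
df = n1 + n2 - 2
p_value = 2 * stats.t.sf(abs(t_stat), df)

print(f"t = {t_stat:.2f}, df = {df}, p = {p_value:.3g}")
```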
Edit: I forced the intercept through $0$ because I am running this regression for several different $y$ values (pollutants A, B, C, D) against one $x$ value (pollutant X), and in theory, when X is $0$, the $y$ values should also be $0$. Also, because I want to compare the slopes of A vs. X, B vs. X, etc., I initially thought that fixing the intercept at the same constant in all cases would allow a more 'apples to apples' comparison.
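In case it helps anyone reproduce the behavior, here is a small self-contained sketch with synthetic data (numpy and statsmodels are just my choices for the illustration; the pollutant values are made up) showing how dropping the intercept shrinks the slope SE when $x$ is far from $0$:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic data: x well away from 0, y roughly proportional to x,
# loosely mimicking the pollutant-X vs. pollutant-A setup (values made up).
x = rng.uniform(90.0, 110.0, size=1000)
y = 0.02 * x + rng.normal(0.0, 0.5, size=1000)

# Fit with a free intercept (constant column added to the design matrix).
fit_free = sm.OLS(y, sm.add_constant(x)).fit()

# Fit through the origin (no constant column).
fit_origin = sm.OLS(y, x).fit()

# The through-origin SE is far smaller because its variance denominator is
# sum(x_i^2) rather than sum((x_i - mean(x))^2).
print("slope SE, free intercept:    ", fit_free.bse[1])
print("slope SE, through the origin:", fit_origin.bse[0])
```

With $x$ concentrated around $100$ like this, the through-origin SE comes out more than an order of magnitude smaller, which matches what I am seeing in my data.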
