Suppose we have
$$y = b_1x_1 + b_2x_2 + b_3x_3 + e$$
as our regression model.
Imposing a linear restriction, say $b_1 + b_2 + b_3 = 0$, allows us to rewrite the model as
$$y = b_1(x_1 - x_3) + b_2(x_2 - x_3) + e$$
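To make the reparametrization concrete, here is a minimal sketch in Python (simulated data and hypothetical coefficient values, purely for illustration): the restricted fit regresses $y$ on $x_1 - x_3$ and $x_2 - x_3$, and $b_3$ is recovered from the restriction.

```python
# Minimal sketch of the reparametrization above, on simulated data.
# All coefficient values and sample sizes here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# True coefficients chosen to satisfy the restriction b1 + b2 + b3 = 0.
b1, b2, b3 = 1.5, -0.5, -1.0
X = rng.normal(size=(n, 3))
y = X @ np.array([b1, b2, b3]) + rng.normal(size=n)

# Restricted fit: regress y on (x1 - x3) and (x2 - x3).
Z = np.column_stack([X[:, 0] - X[:, 2], X[:, 1] - X[:, 2]])
b1_hat, b2_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
b3_hat = -(b1_hat + b2_hat)  # recovered from the restriction

print(b1_hat, b2_hat, b3_hat)  # close to (1.5, -0.5, -1.0)
```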
Given that such restrictions often improve the out-of-sample performance of the estimates (even when the restrictions are wrong), I was wondering whether this might be connected to Stein's paradox. Intuitively, it seems to me that $b_3$ is essentially 'shrunk' to zero by the restriction, and that this somehow improves the estimates.
Could anyone give a more theoretically robust explanation? Or, if I am wrong here, point out where I am wrong.
There are many other such instances in the econometrics literature.
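For what it's worth, the out-of-sample claim is easy to probe by simulation. The sketch below compares unrestricted OLS with the restricted fit above when the restriction is slightly false; the sample sizes, coefficients, and the size of the violation are all my own hypothetical choices, not taken from any particular paper.

```python
# Rough Monte Carlo sketch: even a mildly false restriction can lower
# test-set prediction error in small samples.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, reps = 30, 1000, 2000
beta = np.array([1.0, -0.6, -0.3])  # b1 + b2 + b3 = 0.1, so the restriction is slightly false

mse_u, mse_r = 0.0, 0.0
for _ in range(reps):
    Xtr = rng.normal(size=(n_train, 3))
    ytr = Xtr @ beta + rng.normal(size=n_train)
    Xte = rng.normal(size=(n_test, 3))
    yte = Xte @ beta + rng.normal(size=n_test)

    # Unrestricted OLS.
    bu = np.linalg.lstsq(Xtr, ytr, rcond=None)[0]

    # Restricted OLS via the reparametrization above.
    Ztr = np.column_stack([Xtr[:, 0] - Xtr[:, 2], Xtr[:, 1] - Xtr[:, 2]])
    b1r, b2r = np.linalg.lstsq(Ztr, ytr, rcond=None)[0]
    br = np.array([b1r, b2r, -(b1r + b2r)])

    mse_u += np.mean((yte - Xte @ bu) ** 2) / reps
    mse_r += np.mean((yte - Xte @ br) ** 2) / reps

print(f"unrestricted test MSE: {mse_u:.4f}, restricted: {mse_r:.4f}")
```

The pattern is the familiar bias-variance trade-off: the restriction adds a little bias but removes a parameter's worth of variance, which is exactly the flavour of shrinkage the question is about.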
Thanks for the link. I had looked at that before posting this, but I thought it was a bit different. Intuitively they seem to be about the same thing, but I am wondering if there is a more solid proof or demonstration.
– shellsnail Mar 02 '15 at 13:41
But this finding does exist in the literature. What is commonly observed is that a random/false restriction will improve out-of-sample predictions slightly, and a theory-driven restriction tends to improve them even further. I recall a paper showing this, I just have to find it...
– shellsnail Mar 02 '15 at 13:58
(i) if the restrictions are correct and $\delta = 0$, the pretest estimator has a smaller risk than the ML estimator $\hat{\beta}$ at the origin, $\delta = 0$, and the risk depends on the level of significance $\alpha$ and correspondingly the critical value of the test $c$;
(ii) as the hypothesis error $\delta$ or $\lambda$ grows, the risk of the pretest estimator $\hat{\beta}_0$ increases, obtains a maximum after exceeding the risk of the MLE, $\hat{\beta}$, and then monotonically decreases to approach $\sigma^2 K$, the risk of the MLE
– shellsnail Mar 03 '15 at 11:28
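Both quoted properties can be seen in a toy simulation. Below is a hedged sketch in the simplest normal-means setting ($K$-dimensional mean, identity covariance, $\sigma^2 = 1$, all values hypothetical); it is a stand-in for, not a reproduction of, the paper's setup. The pretest estimator keeps the restricted estimate (zero) unless a chi-square test rejects.

```python
# Monte Carlo sketch of the quoted risk behaviour of a pretest estimator.
# Setting: observe x ~ N(theta, I_K); the MLE is x itself, with risk K.
# The pretest estimator returns 0 unless the chi-square test rejects theta = 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
K, reps, alpha = 5, 20000, 0.05
c = stats.chi2.ppf(1 - alpha, df=K)  # critical value of the chi-square test

for lam in [0.0, 2.0, 5.0, 10.0, 25.0, 50.0]:
    # Hypothesis error lambda = ||theta||^2; put it all on the first coordinate.
    theta = np.zeros(K)
    theta[0] = np.sqrt(lam)
    x = theta + rng.normal(size=(reps, K))

    # Pretest: keep the restricted estimate (0) unless the test rejects.
    reject = np.sum(x ** 2, axis=1) > c
    est = np.where(reject[:, None], x, 0.0)
    risk = np.mean(np.sum((est - theta) ** 2, axis=1))
    print(f"lambda={lam:5.1f}  pretest risk={risk:6.2f}  (MLE risk={K})")
```

At $\lambda = 0$ the pretest risk is well below $K$; as $\lambda$ grows it rises above $K$, peaks, and then falls back toward $K$, matching (i) and (ii) above.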