Suppose you have an i.i.d. sample $\{(Y_i, X_{1,i}, X_{2,i}): i = 1, \ldots, n\}$. You want to estimate the causal effect
of $X_1$ on $Y$. You first run a regression $Y_i = \beta_0 + \beta_1 X_{1,i} + u_i$ and get the following result:
where the numbers in parentheses are standard errors.
Now, suppose you worry about omitted variable bias (OVB), so you are considering whether to include $X_{2,i}$ in the regression. The only condition you can check is whether $X_1$ and $X_2$ are correlated. Propose a test of the null $H_0: \mathrm{Corr}(X_1, X_2) = 0$ based on a regression. Describe what regression you would run and what the test procedure would be. Explain why the proposed test works.
I know that OVB occurs when we omit a regressor (leaving it in the error term $u_i$ instead of including it as a new regressor $X_2$) that affects $Y$, and that omitted variable $X_2$ is correlated with $X_1$. However, how does testing whether the correlation equals zero help us know whether there is OVB? I don't understand.
**Edit:** I ran into another question, which is how to test whether the correlation is significant. How do I obtain $SE(\widehat{\mathrm{Corr}}(X_1, X_2))$? I was thinking about obtaining a t-statistic and comparing it with 1.96, since I want $\alpha = 0.05$.
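To make my question concrete, here is a sketch of the regression-based procedure I have in mind (on simulated data, since the actual dataset isn't shown, and assuming homoskedastic errors): regress $X_2$ on $X_1$, and t-test the slope, since the OLS slope equals $\mathrm{Cov}(X_1, X_2)/\mathrm{Var}(X_1)$, so it is zero exactly when the covariance (and hence the correlation) is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)  # simulated: correlated with x1 by construction

# OLS of X2 on X1: the slope gamma1 equals Cov(X1, X2) / Var(X1),
# so a t-test of gamma1 = 0 is a test of Cov(X1, X2) = 0.
X = np.column_stack([np.ones(n), x1])
gamma = np.linalg.lstsq(X, x2, rcond=None)[0]
resid = x2 - X @ gamma
sigma2 = resid @ resid / (n - 2)             # homoskedastic error variance estimate
var_gamma = sigma2 * np.linalg.inv(X.T @ X)  # classical OLS covariance matrix
t_stat = gamma[1] / np.sqrt(var_gamma[1, 1])
reject = abs(t_stat) > 1.96                  # two-sided test at alpha = 0.05
```

Is this the right idea, i.e. does the standard error of the slope here play the role of the $SE$ I was asking about?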