
With regular frequentist statistics, I know that in two-stage least squares you can regress the first stage, take the predicted values, and plug them into the second stage. How does this work with Bayesian IV? The first-stage predictions will not be point estimates but posterior densities that incorporate uncertainty.
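For concreteness, the frequentist two-step procedure described here can be sketched in plain NumPy (simulated data and variable names are my own, not from the question):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
z = rng.normal(size=n)                    # instrument
u = rng.normal(size=n)                    # structural error
x = 0.8 * z + u + rng.normal(size=n)      # endogenous regressor: correlated with u
y = 0.5 * x + u                           # true coefficient on x is 0.5

# first stage: regress x on (1, z) and keep the fitted values
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# second stage: regress y on (1, x_hat), treating the fitted values as data
X_hat = np.column_stack([np.ones(n), x_hat])
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0][1]
print("2SLS estimate of beta:", beta_2sls)   # close to 0.5
```

It is this "plug in the fitted values as if they were data" step that has no obvious counterpart once the first-stage predictions are posterior densities.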

Rick726

1 Answer


Bayesians generally think in terms of models rather than estimators. The setup you've described is a simultaneous (linear) equations model. For instance, it could be $$ y_1 = \beta_{12} y_{2} + \beta_{11} x_{1} + u_{1} \\ y_2 = \beta_{21} x_{1} + \beta_{23} x_3 + u_{2} $$ where you'd call $y_{1}$ your regressand, $y_{2}$ the endogenous regressor in your "second stage," $x_3$ the instrumental variable excluded from the "first stage," and $x_1$ an otherwise exogenous regressor. You can rewrite this in matrix notation as $$ \begin{bmatrix} 1 & -\beta_{12} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} + \begin{bmatrix} -\beta_{11} & 0 \\ -\beta_{21} & -\beta_{23} \end{bmatrix} \begin{bmatrix} x_1 \\ x_3 \end{bmatrix} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$$ or simply $\mathbf{B}\mathbf{y} + \mathbf{\Gamma}\mathbf{x} = \mathbf{u}$.

Typically you would then choose a bivariate normal density for $\mathbf{u}$, say $\phi(\mathbf{u}|\mathbf{x})$, and form your likelihood function while minding the Jacobian for the change of variables: $$ \phi(\mathbf{u}|\mathbf{x}) \, \mathrm{d} \mathbf{u} = \phi(\mathbf{B}\mathbf{y} + \mathbf{\Gamma}\mathbf{x} \,|\, \mathbf{x}) \left| \frac{\mathrm{d} \mathbf{u}}{\mathrm{d} \mathbf{y}} \right| \, \mathrm{d} \mathbf{y} $$ Here $\left| \mathrm{d} \mathbf{u} / \mathrm{d} \mathbf{y} \right| = |\det \mathbf{B}|$, which equals 1 for the triangular $\mathbf{B}$ above. The rest is the usual Bayesian procedure for linear regression: choose priors and apply Bayes' rule. Note that nothing here plugs first-stage predictions into a second stage; the two equations are estimated jointly, so the posterior for $\beta_{12}$ automatically reflects the uncertainty in the "first stage."

There are many more details on the algebra, and on the additional intricacies of choosing suitable priors, in Drèze (1976) as well as Kleibergen & Zivot (2003), among others, but I hope this may serve as a start.
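The joint-likelihood approach can be sketched numerically. Below is a minimal illustration (my own, not from the cited references): it simulates data from the two-equation model with correlated errors, forms the log posterior from the structural residuals $\mathbf{u} = \mathbf{B}\mathbf{y} + \mathbf{\Gamma}\mathbf{x}$, and draws from it with a simple random-walk Metropolis sampler. For brevity it assumes flat priors on the coefficients and fixes the error covariance at its true value; a full treatment would put a prior on $\Sigma$ as well.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# true structural parameters: beta_12, beta_11, beta_21, beta_23
beta_true = np.array([0.5, 1.0, 0.8, 1.5])
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])    # Cov(u); rho != 0 creates endogeneity
Sigma_inv = np.linalg.inv(Sigma)

x1 = rng.normal(size=n)
x3 = rng.normal(size=n)                        # the instrument, excluded from eq. 1
u = rng.multivariate_normal([0.0, 0.0], Sigma, size=n)
y2 = beta_true[2] * x1 + beta_true[3] * x3 + u[:, 1]   # "first stage"
y1 = beta_true[0] * y2 + beta_true[1] * x1 + u[:, 0]   # "second stage"

def log_post(b):
    """Log posterior up to a constant: flat priors, so just the log likelihood.

    Structural residuals are u = B y + Gamma x; log|det B| = 0 here, and the
    normal density's constant drops out of the Metropolis ratio.
    """
    b12, b11, b21, b23 = b
    U = np.column_stack([y1 - b12 * y2 - b11 * x1,
                         y2 - b21 * x1 - b23 * x3])
    return -0.5 * np.einsum('ni,ij,nj->', U, Sigma_inv, U)

# random-walk Metropolis over (beta_12, beta_11, beta_21, beta_23)
b, lp = np.zeros(4), log_post(np.zeros(4))
draws = []
for t in range(15000):
    prop = b + 0.02 * rng.normal(size=4)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        b, lp = prop, lp_prop
    if t >= 5000:                               # discard burn-in
        draws.append(b)
draws = np.array(draws)

print("posterior mean of beta_12:", draws[:, 0].mean())
```

The point of the sketch is that there is no "plug in fitted values" step: each posterior draw of $\beta_{12}$ is conditioned on coefficient values for both equations at once, so first-stage uncertainty propagates automatically.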

Durden