
Consider the regression

$$ y = X_1 \beta_1 + X_2 \beta_2 + u $$ with $u \sim N(0,\sigma^2 I)$.

I want to derive the Wald statistic for the hypothesis $H_0: \beta_2 = 0$, where $\beta_2$ contains $k_2$ of the $k$ coefficients.

As you know, for the regression $y = X\beta + u$ and the hypothesis $H_0: R\beta = r$,

the Wald statistic is equal to

$$ W = \left( R\widehat{\beta }-r\right) ^{T} \left( R \, I^{-1}\!\left( \widehat{\beta }\right) R^{T}\right)^{-1} \left( R\widehat{\beta }-r\right), $$

where "inverse of information matrix" $I^{-1} ( \widehat{\beta })$ $ = \sigma^2 (X^ T X) ^{-1} $

so the Wald statistic becomes

$$ W = \dfrac{1}{\widehat{\sigma}^{2}} \left( R\widehat{\beta }-r\right) ^{T} \left( R\left( X^{T}X\right) ^{-1}R^{T}\right) ^{-1} \left( R\widehat{\beta }-r\right), $$

which asymptotically has a chi-square distribution $\chi_q^2$, where $q$ is the number of restrictions.
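For my hypothesis $\beta_2 = 0$, I think the general form specializes with $R = \begin{pmatrix} 0 & I_{k_2} \end{pmatrix}$ and $r = 0$, so that

$$ R\widehat{\beta} - r = \widehat{\beta}_2, \qquad W = \dfrac{1}{\widehat{\sigma}^{2}} \, \widehat{\beta}_2^{\,T} \left[ \left( \left( X^{T}X \right)^{-1} \right)_{22} \right]^{-1} \widehat{\beta}_2, $$

where $\left( \left( X^{T}X \right)^{-1} \right)_{22}$ is the lower-right $k_2 \times k_2$ block of $\left( X^{T}X \right)^{-1}$.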

But I want to derive this Wald statistic in the form $ n \cdot \dfrac{SSR_R - SSR_U}{SSR_U} $ (with the help of the LM test),

where $SSR_R$ and $SSR_U$ stand for the restricted and unrestricted sums of squared residuals.
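To be explicit about what I mean by these (assuming the standard definitions):

$$ SSR_U = \min_{\beta_1,\beta_2} \left\| y - X_1\beta_1 - X_2\beta_2 \right\|^2, \qquad SSR_R = \min_{\beta_1} \left\| y - X_1\beta_1 \right\|^2 . $$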

The reason is that I want to establish the relationship $$ W = \dfrac{n \cdot k_2}{n-k} \cdot F, $$ where $F$ is the $F$ statistic $\dfrac{(SSR_R - SSR_U)/k_2}{SSR_U/(n-k)}$.
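As a sanity check, here is a small simulation I would use to verify these identities numerically (a sketch only, assuming the ML variance estimate $\widehat{\sigma}^{2} = SSR_U / n$; all variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k1, k2 = 200, 3, 2
k = k1 + k2

# Simulate y = X1 beta1 + X2 beta2 + u, with beta2 = 0 so H0 holds
X1 = rng.normal(size=(n, k1))
X2 = rng.normal(size=(n, k2))
X = np.hstack([X1, X2])
beta = np.concatenate([rng.normal(size=k1), np.zeros(k2)])
y = X @ beta + rng.normal(size=n)

# Unrestricted and restricted least-squares fits and their SSRs
b_u = np.linalg.lstsq(X, y, rcond=None)[0]
b_r = np.linalg.lstsq(X1, y, rcond=None)[0]
ssr_u = np.sum((y - X @ b_u) ** 2)
ssr_r = np.sum((y - X1 @ b_r) ** 2)

# Wald in quadratic form, with the ML variance estimate sigma^2_hat = SSR_U / n
R = np.hstack([np.zeros((k2, k1)), np.eye(k2)])   # H0: R beta = 0 picks out beta2
sigma2_ml = ssr_u / n
XtX_inv = np.linalg.inv(X.T @ X)
Rb = R @ b_u
W_quad = Rb @ np.linalg.solve(R @ XtX_inv @ R.T, Rb) / sigma2_ml

# Wald and F in sum-of-squares form
W_ssr = n * (ssr_r - ssr_u) / ssr_u
F = ((ssr_r - ssr_u) / k2) / (ssr_u / (n - k))

print(W_quad, W_ssr)                  # these two should agree
print(W_ssr, n * k2 / (n - k) * F)    # and W = n*k2/(n-k) * F
```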

How can I do this from the perspective of maximum likelihood estimation? I can't see how I should use the restriction.
