I will illustrate with models that have one and two regressors; the generalization to four variables should be obvious.
library(lmtest)
y <- rnorm(100)           # example data: y is unrelated to x1 and x2
x1 <- rnorm(100)
x2 <- rnorm(100)
reg1 <- lm(y ~ x1)        # restricted model
reg2 <- lm(y ~ x1 + x2)   # unrestricted model
waldtest(reg1, reg2)
So, load the lmtest package (the next three lines just create some example data), run the restricted regression reg1 and the unrestricted one reg2, and let waldtest do the comparison for you.
For my random numbers, I get the following output:
Wald test

Model 1: y ~ x1
Model 2: y ~ x1 + x2
  Res.Df Df      F Pr(>F)
1     98
2     97  1 0.0178 0.8942
Thus, the null hypothesis that $\beta_2=0$ cannot be rejected at $\alpha=0.05$, as the $p$-value Pr(>F) is far larger than 0.05. This is not surprising given how I generated the data: there is no relationship between the x's and y.
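For contrast, here is a quick sketch of the opposite case, where x2 does drive y (the seed and coefficient are my own illustrative choices), so the test should reject:

set.seed(42)              # assumed seed, purely for reproducibility
x1 <- rnorm(100)
x2 <- rnorm(100)
y <- 2*x2 + rnorm(100)    # now beta_2 = 2 by construction
reg1 <- lm(y ~ x1)        # restricted model
reg2 <- lm(y ~ x1 + x2)   # unrestricted model
waldtest(reg1, reg2)      # Pr(>F) will now be essentially zero
# anova(reg1, reg2) gives the same F test for nested lm fits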
As Helix123 points out in the comments, the car package lets you do the test without setting up the restricted model yourself, e.g. for $\beta_1 = \beta_3 = 0$:
car::linearHypothesis(reg, c("x1", "x3"), c(0,0), test = "F")
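To make that snippet runnable as a minimal sketch for the four-regressor case from the opening sentence (x3, x4 and the model object reg are my own illustrative additions; car must be installed):

x3 <- rnorm(100)
x4 <- rnorm(100)
reg <- lm(y ~ x1 + x2 + x3 + x4)                    # unrestricted four-regressor model
car::linearHypothesis(reg, c("x1 = 0", "x3 = 0"),   # same joint hypothesis in symbolic form
                      test = "F")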