How do I test $H_0 : \beta_1\leq 2$ in R?

The data is as follows:

x<-c(1,2,3,3,4,5,5)
y<-c(3,7,5,8,11,14,12)
sy_kang
  • Please, write down the regression equation you have in mind. – utobi Oct 09 '22 at 06:01
  • https://stats.stackexchange.com/a/136602/919 answers this question. https://stats.stackexchange.com/questions/50447 is closely related. – whuber Oct 09 '22 at 12:59

2 Answers

If the regression equation is the simplest one, i.e.

$$Y_i = \beta_0 + \beta_1x_i + \epsilon_i,$$

then to test $H_0:\beta_1\leq 2$ against $H_1:\beta_1>2$ you can do

x <- c(1, 2, 3, 3, 4, 5, 5)
y <- c(3, 7, 5, 8, 11, 14, 12)
n <- length(y)

summary(mod <- lm(y ~ x))  # fit the model and print its summary

Call:
lm(formula = y ~ x)

Residuals:
       1        2        3        4        5        6        7 
 0.02128  1.57447 -2.87234  0.12766  0.68085  1.23404 -0.76596 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)   0.5319     1.5881   0.335  0.75127   
x             2.4468     0.4454   5.494  0.00273 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.632 on 5 degrees of freedom
Multiple R-squared:  0.8579,    Adjusted R-squared:  0.8294 
F-statistic: 30.18 on 1 and 5 DF,  p-value: 0.002729

t_obs <- (2.4468 - 2)/0.4454                # observed t statistic
pt(t_obs, df = n - 2, lower.tail = FALSE)   # one-sided p-value
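
If you prefer not to retype the numbers, one possible variation is to read the estimate and standard error from the coefficient table of mod:

est <- coef(summary(mod))["x", "Estimate"]        # slope estimate
se  <- coef(summary(mod))["x", "Std. Error"]      # its standard error
pt((est - 2)/se, df = n - 2, lower.tail = FALSE)  # same one-sided p-value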

utobi

The general idea of t-testing a regression coefficient $\beta_k$ is that, when the true coefficient equals the hypothesized value $\beta_{k0}$, the following statistic follows a t-distribution with the model's residual degrees of freedom.

$$ \dfrac{ \hat\beta_k-\beta_{k0} }{ \widehat{SE}(\hat\beta_k) } $$

We typically want to test whether the coefficient is nonzero, so we take $\beta_{k0}=0$, but nothing stops us from testing with $\beta_{k0}=2$. For your exact null hypothesis we would do a one-sided t-test, though a two-sided test could be performed against the alternative hypothesis $\beta_k\ne2$.
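
Concretely, writing $t_{\text{obs}}$ for the observed value of this statistic and $\nu$ for the residual degrees of freedom, the p-values for the two alternatives are

$$p_{\text{one-sided}} = P\left(T_\nu > t_{\text{obs}}\right), \qquad p_{\text{two-sided}} = 2\,P\left(T_\nu > |t_{\text{obs}}|\right).$$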

In software, we get the estimated coefficient and its standard error from the regression summary, form the test statistic from the fraction above, and compute the one-sided p-value as usual.

Here is an example of how you could implement this in R.

set.seed(2022)
N <- 51
k <- 1                                # which coefficient to test (here the slope on x)
x <- rnorm(N)
y <- 2*x + rnorm(N)                   # simulate data with true slope 2
L <- lm(y ~ x)
s <- summary(L)$coef                  # coefficient table: estimates, standard errors, etc.
tstat <- (s[k+1, 1] - 2)/s[k+1, 2]    # t statistic for the null value 2
dof <- summary(L)$df[2]               # residual degrees of freedom
p.value <- 1 - pt(tstat, dof)         # one-sided p-value for the alternative beta_k > 2
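
A possible sanity check, using the objects defined above: the one-sided test at the 5% level rejects exactly when a 95% one-sided lower confidence bound for the coefficient exceeds 2.

lower_bound <- s[k+1, 1] - qt(0.95, dof)*s[k+1, 2]  # 95% lower confidence bound for the slope
lower_bound > 2   # TRUE exactly when the test rejects H0: beta_k <= 2 at the 5% level
p.value < 0.05    # same decision from the p-value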
Dave