Here are two captures from different YouTube videos: one obtains $Var(\hat{\beta_1})$, the other obtains $Var(\hat{\beta_1}|X)$, yet they yield the same equation, even though the second capture further comments that $Var(\hat{\beta_1}|X)\neq Var(\hat{\beta_1})$.
See also https://stats.stackexchange.com/questions/183986/derivation-of-ols-variance – Christoph Hanck Jun 07 '22 at 10:49
1 Answer
In both videos, the speakers caution that $X$ is assumed fixed. When $X$ is fixed, there is no variation in $X$, so conditioning on it changes nothing and $Var(\hat{\beta_1} | X) = Var(\hat{\beta_1})$. So in short, the answer to your question is "yes."
When the term "simple linear regression" is used, that assumption is normally invoked (whether implicitly or explicitly). The Wikipedia article on OLS provides a decent summary of this assumption: https://en.wikipedia.org/wiki/Ordinary_least_squares#Assumptions
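To see the fixed-$X$ case concretely, here is a minimal Monte Carlo sketch (my own illustration, not from either video): with the same design points $x_i$ reused in every replication, the empirical sampling variance of $\hat{\beta_1}$ matches the textbook formula $Var(\hat{\beta_1}|X) = \sigma^2 / \sum_i (x_i - \bar{x})^2$. All parameter values below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 3.0     # arbitrary true parameters
x = np.linspace(0, 10, 50)              # X held fixed across replications
sxx = np.sum((x - x.mean()) ** 2)

n_reps = 20000
slopes = np.empty(n_reps)
for i in range(n_reps):
    # only the errors are redrawn; the design X never changes
    y = beta0 + beta1 * x + rng.normal(0, sigma, size=x.size)
    slopes[i] = np.sum((x - x.mean()) * (y - y.mean())) / sxx

print(slopes.var())       # empirical Var(beta1_hat | X)
print(sigma**2 / sxx)     # theoretical sigma^2 / Sxx
```

The two printed numbers agree to within simulation noise, because with $X$ fixed the only randomness in $\hat{\beta_1}$ comes from the error terms.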
bschneidr
Thank you for your reply. What about the second video, which also asserts that $Var(\hat{\beta_1} | X) \neq Var(\hat{\beta_1})$? If $X$ is random, what is $Var(\hat{\beta_1})$? – LJNG Jun 07 '22 at 01:00

