I have fit two linear regressions to estimate y, and I got these results. First:
Residual standard error: 1.021 on 276 degrees of freedom
Multiple R-squared: 0.2347, Adjusted R-squared: 0.2059
F-statistic: 8.362 on 10 and 276 DF, p-value: 6.878e-12
Second:
Residual standard error: 1.025 on 273 degrees of freedom
Multiple R-squared: 0.2312, Adjusted R-squared: 0.1945
F-statistic: 6.314 on 13 and 273 DF, p-value: 2.085e-10
I know from $R^2$ that neither model fits well, but which one is better than the other? Can someone explain the other statistics besides R-squared? Should I use ANOVA to compare them?
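For context on what adjusted R-squared adds beyond plain R-squared: it penalizes every extra regressor, via adj. $R^2 = 1 - (1 - R^2)(n - 1)/(n - p - 1)$. Both summaries above imply n = 287 (residual df + regressors + intercept), so the relation can be checked directly from the posted figures; a minimal Python sketch, using the second model's numbers:

```python
# Adjusted R-squared penalizes R-squared for the number of regressors p:
#   adj_R2 = 1 - (1 - R2) * (n - 1) / (n - p - 1)
# The figures below come from the second summary in the question;
# n = 287 is implied by 273 residual df + 13 regressors + intercept.

def adjusted_r2(r2, n, p):
    """Adjusted R-squared for a model with p regressors on n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

n = 273 + 13 + 1                      # 287 observations
adj = adjusted_r2(r2=0.2312, n=n, p=13)
print(round(adj, 4))                  # close to the reported 0.1945
```

This is why the second model's adjusted R-squared drops further below its R-squared than the first model's does: it pays a larger penalty for its 13 regressors.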
A reasonable approach is to take @linksys's advice and test whether the coefficients on the additional regressors are jointly different from zero at a statistically significant level.
In general, though, what counts as good/bad/better/worse unfortunately involves a lot of context-specific knowledge and can be more impressionistic art than hard science.
– Matthew Gunn Feb 05 '16 at 05:54
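The joint test described in the comment can be approximated from the two posted summaries alone, assuming the first model is nested in the second and both were fit to the same 287 observations (both degrees-of-freedom figures imply that n). The residual sum of squares is recoverable as RSE² × residual df, which yields the partial F statistic for the three added regressors; a minimal Python sketch:

```python
# Partial F-test for nested linear models, reconstructed from the two
# summaries in the question. Assumes model 1 (10 regressors, 276 residual
# df) is nested in model 2 (13 regressors, 273 residual df) on the same data.

def partial_f(rse_small, df_small, rse_big, df_big):
    """F statistic for H0: the added regressors' coefficients are all zero."""
    rss_small = rse_small ** 2 * df_small   # RSS = RSE^2 * residual df
    rss_big = rse_big ** 2 * df_big
    q = df_small - df_big                   # number of added regressors (3)
    return ((rss_small - rss_big) / q) / (rss_big / df_big)

f = partial_f(rse_small=1.021, df_small=276, rse_big=1.025, df_big=273)
print(round(f, 3))   # well below 1, so the extra regressors add little
```

An F statistic this far below 1 would not come close to rejecting the null at any conventional level, which matches the drop in adjusted R-squared: the three extra regressors do not earn their degrees of freedom, favoring the first model. In R, `anova(model1, model2)` performs this same test directly from the fitted objects.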