I have recently read some work that tests hypotheses about individual regression coefficients even though the overall regressions containing those coefficients have $R^2_{adj}<0$. One example is Schmidt & Fahlenbrach (2017), though admittedly in regressions where the primary variables of interest (the ones whose tests I am skeptical of) are instrumental variables.
The hypothesis tests of the individual regression coefficients come out significant at $p<0.05$, for what that is worth. The $R^2_{adj}<0$ is troubling, however. If we take
$$R^2_{adj} = 1 - \left[\left(\frac{\sum_{i=1}^{n}\left( y_i - \hat y_i \right)^2}{n - p - 1}\right) \middle/ \left(\frac{\sum_{i=1}^{n}\left( y_i - \bar y \right)^2}{n-1}\right) \right],$$
then $R^2_{adj}<0$ means that the numerator of that ratio exceeds the denominator; that is, our (unbiased) estimate of the error variance exceeds our (unbiased) estimate of the total variance. From this I conclude that the model exhibits "anti"-performance, and we are worse off for having done the modeling.

How could I believe any hypothesis test of an individual regression coefficient when the model performs so poorly that it not only lacks much predictive ability (which is rather typical) but actually predicts worse in-sample than simply using $\bar y$, i.e., than doing no modeling at all?
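Purely to make the algebra concrete, here is a minimal numerical sketch (Python with NumPy; the data and the deliberately mis-signed "model" are invented for illustration and have nothing to do with the paper). It evaluates the formula above directly and shows that $R^2_{adj}<0$ is exactly the statement that the unbiased error-variance estimate exceeds the unbiased total-variance estimate, i.e., that the fitted values do worse in-sample than $\bar y$:

```python
import numpy as np

def adjusted_r2(y, y_hat, p):
    """Adjusted R^2 as in the formula above: 1 - [SSE/(n-p-1)] / [SST/(n-1)]."""
    n = len(y)
    sse = np.sum((y - y_hat) ** 2)        # residual sum of squares
    sst = np.sum((y - y.mean()) ** 2)     # total sum of squares
    mse_unbiased = sse / (n - p - 1)      # unbiased error-variance estimate
    var_unbiased = sst / (n - 1)          # unbiased total-variance estimate
    return 1.0 - mse_unbiased / var_unbiased, mse_unbiased, var_unbiased

rng = np.random.default_rng(0)
n, p = 40, 1
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

# Fitted values from a deliberately bad "model" with the wrong slope sign.
# (In, e.g., a second-stage IV regression the fitted values are not the
# least-squares fit of y on the regressors, so SSE can exceed SST,
# unlike in plain OLS.)
y_hat_bad = y.mean() - 1.5 * (x - x.mean())

r2_adj, mse, var = adjusted_r2(y, y_hat_bad, p)
print(f"unbiased MSE            = {mse:.3f}")
print(f"unbiased total variance = {var:.3f}")
print(f"adjusted R^2            = {r2_adj:.3f}")  # negative exactly when MSE > total variance
```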
How believable are the hypothesis tests of individual regression coefficients when the overall regressions have $R^2_{adj}<0$?
(This seems related but not quite the same, and it contains a mixed bag of responses anyway.)
REFERENCE

Schmidt, C., & Fahlenbrach, R. (2017). Do exogenous changes in passive institutional ownership affect corporate governance and firm value? Journal of Financial Economics, 124(2), 285–306.