
In linear regression, the two problems of non-normality and heteroskedasticity are often present together in the same model.

However, the two problems can be (though not necessarily) inter-related; if one is solved, the other may be solved automatically.

Suppose I have a regression model with non-normal errors (the p-value of the Shapiro-Wilk test is < .001) and heteroskedastic errors (the p-value of White's test is < .001).

I am interested in addressing the non-normality with bootstrapping, but I am not sure whether the heteroskedasticity would be solved automatically as well.

Recently, I found a bootstrap version of White's test for heteroskedasticity in the R package "whitestrap."

If the bootstrap version of White's test returns a non-significant p-value (i.e., > .05), is that a good justification for running our regression analysis with bootstrapping in order to solve both problems?
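One of the comments below suggests the pairs (case-resampling) bootstrap, which resamples whole (x, y) rows rather than residuals, so it does not assume normal or equal-variance errors. Here is a minimal sketch in Python with simulated data; all variable names and the data-generating process are illustrative assumptions, not part of the original question.

```python
# Pairs (case-resampling) bootstrap for OLS coefficients.
# Resampling whole (x, y) rows preserves any heteroskedastic structure,
# so the resulting intervals do not rely on normal, equal-variance errors.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
# Simulated heteroskedastic, non-normal errors: spread grows with x
# (mean-centred exponential noise).
scale = 0.5 + 0.3 * x
y = 1.0 + 2.0 * x + rng.exponential(scale) - scale

X = np.column_stack([np.ones(n), x])

def ols(X, y):
    # Least-squares coefficients (intercept, slope)
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_hat = ols(X, y)

B = 2000
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, n)  # resample rows with replacement
    boot[b] = ols(X[idx], y[idx])

# Percentile 95% confidence intervals for intercept and slope
ci = np.percentile(boot, [2.5, 97.5], axis=0)
print("estimates:", beta_hat)
print("95% CI intercept:", ci[:, 0])
print("95% CI slope:", ci[:, 1])
```

Because each bootstrap sample keeps the pairing of each x with its own error, the interval for the slope widens in exactly the regions where the error variance is larger, which is the behaviour a residual bootstrap (which assumes i.i.d. errors) would miss.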

Hussain
  • Those low p-values could be misleading. While the tests might be right that the error terms do not have normal distributions or distributions with equal variances, the p-value tells you nothing about the extent to which those assumptions are violated. Depending on your sample size, it might just be that the tests are (correctly) identifying tiny deviations that have minimal impact on your work. – Dave Feb 14 '23 at 02:24
  • You're right. But just assume that the p-values give a good indication of non-normality or heteroskedasticity (and they agree with the histograms / plots) – Hussain Feb 14 '23 at 02:57
  • Maybe look into the pairs bootstrap; see https://stats.stackexchange.com/questions/604511/how-to-bootstrap-prediction-intervals-for-regression-models-with-non-iid-noise – kjetil b halvorsen Feb 18 '23 at 02:16
  • Why not perform some transformations on the data first? – Estimate the estimators Feb 19 '23 at 17:03
  • Transformations most of the time fail (in my experience) – Hussain Feb 20 '23 at 11:26

0 Answers