I have a series of data sets that I've fit with non-linear models (the same model with different parameters for each data set). I'm trying to model the residuals e so that we can simulate the results y. When I plot the residuals (predicted y_hat minus observed y) against the model predictions y_hat, the variance looks constant.
I wanted a less subjective approach than just eyeballing the residual plots, but the tests I've found compare variances across groups, and there are no separate groups in this data set. I suppose I could partition the data across different levels of y, but instead I tried a different approach, and I'm interested in feedback.
Does it make sense to fit the linear model abs(e) = slope*y_hat + intercept and just check the p-value of the slope parameter? And what if my residuals are non-normal?
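For concreteness, here is a minimal sketch of that check, assuming Python with numpy and statsmodels; y_hat and e below are synthetic stand-ins for one data set's fitted values and residuals. Regressing abs(e) on the fitted values is essentially the Glejser test, and the sketch also runs statsmodels' closely related Breusch-Pagan test (which uses squared residuals) as a cross-check; neither requires splitting the data into groups.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Synthetic stand-ins for one data set's fitted values and residuals;
# replace these with the y_hat and e from your actual non-linear fit.
rng = np.random.default_rng(0)
y_hat = rng.uniform(1.0, 10.0, size=200)
e = rng.normal(0.0, 0.5, size=200)

# Fit abs(e) = intercept + slope * y_hat and inspect the slope's p-value.
X = sm.add_constant(y_hat)
fit = sm.OLS(np.abs(e), X).fit()
print(f"slope = {fit.params[1]:.4f}, p-value = {fit.pvalues[1]:.4f}")

# Cross-check with Breusch-Pagan, which regresses squared residuals
# on the same design instead of absolute residuals.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(e, X)
print(f"Breusch-Pagan LM p-value = {lm_pvalue:.4f}")
```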
abs(e) = slope*y_hat + intercept. The idea is that the absolute value gives the magnitude of the error while ignoring its direction. – NomNomNomenclature Feb 08 '22 at 20:58
[In reply to a comment arguing that a test on abs(e) or log(abs(e)) is inappropriate, and that what matters is the magnitude of the heteroscedasticity rather than its detectability:] I would think a p-value is useful since it can indicate when a trend of meaningful magnitude is unlikely to be a random manifestation in the data. Perhaps it would be best to look at the p-value and evaluate whether the effect size is large enough to matter? If both conditions are satisfied (low p-value, large effect), then proceed to address the heteroscedasticity. – NomNomNomenclature Feb 09 '22 at 17:20
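A sketch of that low-p-value-plus-large-effect screen, continuing the earlier example; the relative-effect measure and the 0.05 / 0.5 cutoffs are illustrative assumptions, not anything proposed in the thread:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins, as in the earlier sketch.
rng = np.random.default_rng(0)
y_hat = rng.uniform(1.0, 10.0, size=200)
e = rng.normal(0.0, 0.5, size=200)

fit = sm.OLS(np.abs(e), sm.add_constant(y_hat)).fit()

# Effect size: how much the fitted abs(e) changes across the observed
# range of y_hat, relative to the mean abs(e). Purely illustrative.
spread = fit.params[1] * (y_hat.max() - y_hat.min())
relative_effect = abs(spread) / np.abs(e).mean()

# Act only if the slope is both detectable (p < 0.05) and practically
# large (fitted abs(e) varies by more than 50% of its mean).
if fit.pvalues[1] < 0.05 and relative_effect > 0.5:
    print("Variance trend is detectable and large: address heteroscedasticity.")
else:
    print("No practically meaningful variance trend.")
```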