This question was inspired by a discussion I read recently. After obtaining results, we should check the assumptions of the models we used; otherwise, those results may be misleading.
Unfortunately, this important step is often skipped in practice in my field (health sciences), as I see it, mostly because:
i) checking model assumptions can be very time-consuming,
ii) it can make interpreting the results more difficult for the layman,
iii) it can make the results seem less attractive (e.g., if findings lose statistical significance after the statistical methodology is updated in response to a violated assumption), and
iv) analysts may not know the assumptions, how to check them, or what to do when they are violated (a sketch of what such checks can involve follows below).
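To make point iv) concrete, here is a minimal sketch of what "checking assumptions" might look like for an ordinary least squares regression, using simulated data (the data, predictor names, and choice of tests are my own illustrative assumptions, not part of the original discussion):

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical data: outcome y regressed on a single predictor x
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=200)

X = sm.add_constant(x)          # design matrix with intercept
model = sm.OLS(y, X).fit()      # ordinary least squares fit

# Check approximate normality of residuals (Shapiro-Wilk test)
shapiro_stat, shapiro_p = stats.shapiro(model.resid)

# Check homoscedasticity of residuals (Breusch-Pagan test)
bp_stat, bp_p, _, _ = het_breuschpagan(model.resid, X)

print(f"Shapiro-Wilk p = {shapiro_p:.3f}")   # very small p suggests non-normal residuals
print(f"Breusch-Pagan p = {bp_p:.3f}")       # very small p suggests heteroscedasticity
```

Even this small example illustrates the point: each model comes with its own set of checks (and graphical diagnostics on top of formal tests), which takes time and requires knowing what to look for and how to respond when a check fails.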
Whilst the consequences of violating these assumptions are not trivial, my question is: how much would the literature actually benefit from every analyst verifying every assumption? On the one hand, this would very probably improve the overall quality of the literature, for example by reducing false-positive and false-negative findings, inflated coefficients, and deceptively small standard errors and p-values.
On the other hand, the additional time required to perform rigorous statistical analysis would lower the publication rate. If we assume that the false findings described above are non-systematic, because they arise from different data sets and methodologies, then I can see how a higher publication rate could be beneficial, even if the quality of individual results is lower.
Is there any truth to this?