For a given dataset, if the normality test commits a type I or type II error, how reliable are the subsequent results from the parametric or non-parametric technique chosen on its basis? What should we do to minimize these cascading errors?
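To make the question concrete, here is a minimal Monte Carlo sketch of the kind of "cascading" I have in mind: a two-stage procedure that runs a Shapiro-Wilk pre-test and then picks either a t-test or a Mann-Whitney U test, evaluated under a true null. All settings (alpha = 0.05, n = 30 per group, exponential data) are illustrative assumptions on my part, not something established in the linked discussions.

```python
# Monte Carlo sketch: overall type I error of the two-stage procedure
# "Shapiro-Wilk pre-test, then t-test or Mann-Whitney U" under a true null.
# Parameters below are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05          # significance level used at both stages (assumed)
n = 30                # per-group sample size (assumed)
n_sim = 20_000        # number of simulated datasets

rejections = 0
for _ in range(n_sim):
    # Both groups come from the same skewed distribution, so H0 is true.
    x = rng.exponential(scale=1.0, size=n)
    y = rng.exponential(scale=1.0, size=n)

    # Stage 1: normality pre-test on each group.
    normal_ok = (stats.shapiro(x).pvalue > alpha) and (stats.shapiro(y).pvalue > alpha)

    # Stage 2: the main test is chosen conditionally on stage 1.
    if normal_ok:
        p = stats.ttest_ind(x, y).pvalue
    else:
        p = stats.mannwhitneyu(x, y, alternative="two-sided").pvalue

    rejections += (p < alpha)

print(f"Estimated overall type I error of the two-stage procedure: "
      f"{rejections / n_sim:.3f} (nominal {alpha})")
```

The point of the sketch is that the error rate of the combined procedure is not the nominal alpha of either stage taken alone, which is exactly the reliability question I am asking about.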
Technically, these questions seem related to the following discussions about whether it is worthwhile to do normality testing at all:
- How often does one see normally distributed data, and why use parametric tests if they are rare
- Why use parametric test at all if non parametric tests are 'less strict'
- When, if ever, should a normality test be performed on real-world data?
- Is normality testing 'essentially useless'?
Based on my reading so far, the answers there are not well justified on this point and rarely give a full analysis of the question above (or did I miss something?).
Can I argue to peer reviewers that a normality test is unnecessary, and directly use non-parametric techniques to demonstrate significant results?
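As a companion to the sketch above, the cost of skipping the parametric route entirely could be quantified the same way: compare the power of an unconditional t-test against an unconditional Mann-Whitney U test when the data really are normal. Again, the effect size and sample size below are assumptions chosen only for illustration.

```python
# Companion sketch: power of unconditional t-test vs. unconditional
# Mann-Whitney U when the data are truly normal. Settings are assumed.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, n_sim = 0.05, 30, 20_000
shift = 0.5           # true mean difference in SD units (assumed)

power_t = power_u = 0
for _ in range(n_sim):
    x = rng.normal(0.0, 1.0, size=n)
    y = rng.normal(shift, 1.0, size=n)
    power_t += stats.ttest_ind(x, y).pvalue < alpha
    power_u += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha

print(f"Power, t-test:       {power_t / n_sim:.3f}")
print(f"Power, Mann-Whitney: {power_u / n_sim:.3f}")
```

If the power loss from going straight to the non-parametric test is small in scenarios like this, that would seem to support the argument I want to make to reviewers, but I would like a more principled justification than a single simulation.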