
For a given dataset, if a normality test commits a type I or type II error, how reliable are the subsequent results from the parametric or non-parametric technique chosen on its basis? What should we do to minimize these cascaded errors?

Technically, these questions seem related to the following discussions about whether it is worthwhile to do normality testing at all:

Based on my reading so far, the answers there are not well justified and rarely provide a full analysis of the question above (or did I miss something?).

Can I argue to peer reviewers that a normality test is unnecessary, and directly use non-parametric techniques to demonstrate significant results?
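To make the concern concrete, here is a minimal simulation sketch of the two-stage procedure I have in mind. The specific choices (Shapiro-Wilk as the pretest, a t-test vs. Mann-Whitney U as the follow-up, an exponential population, alpha = 0.05) are my own illustrative assumptions, not a claim about what reviewers require:

```python
# Monte Carlo estimate of the overall type I error of a two-stage
# procedure: normality pretest, then a parametric or non-parametric
# location test depending on the pretest outcome.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 30, 2000, 0.05
rejections = 0
for _ in range(reps):
    # Two samples from the same skewed (exponential) population, so
    # any rejection of H0 (equal location) is a type I error.
    a = rng.exponential(size=n)
    b = rng.exponential(size=n)
    # Stage 1: Shapiro-Wilk pretest on each sample.
    looks_normal = (stats.shapiro(a).pvalue > alpha and
                    stats.shapiro(b).pvalue > alpha)
    # Stage 2: choose the follow-up test based on the pretest.
    if looks_normal:
        p = stats.ttest_ind(a, b).pvalue
    else:
        p = stats.mannwhitneyu(a, b, alternative='two-sided').pvalue
    rejections += p < alpha
print(f"estimated overall type I error: {rejections / reps:.3f}")
```

My question is whether the overall error rate of this combined procedure (rather than of either stage alone) is what should be reported, and how it can be controlled.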

Changkun
  • Your title and the body of your question seem quite different. Some of the related questions you list address the title, but possibly not the text in the body. Could you edit one or the other to clarify your question? – mkt Jul 18 '22 at 07:28
  • The links you include in your post seem to provide an answer to the title question (at the very least, clearly not always), so it's a bit hard to see the point of asking it if you have read answers at those links. – Glen_b Jul 18 '22 at 09:10
  • @mkt I updated the question body to show the relationship between the title question and the questions in the body. I hope that may clarify more. If not, please let me know. – Changkun Jul 18 '22 at 09:57
  • @Glen_b, I posted the question after reading all these links. The discussion seems to have diverged. In practice, I've always been asked by peer reviewers to conduct normality tests, whereas I deeply doubt them. Some posts in these links seem to suggest answers but rarely provide a full analysis. It would be more useful to see more solid references. – Changkun Jul 18 '22 at 09:59
  • I didn't assert that there was no variation in opinions, but the title question is "is it always encouraged" and there are clearly some people who do not encourage it, so the answer is therefore 'no'. When you say "practically, I've always been asked by peer reviewers to conduct normality tests" - I don't doubt it, it's common practice in some areas. That is not germane to the point being made in my comment; the title question is already answered in the negative because demonstrably some people don't encourage it (& even offer clear reasons why) - so clearly "no, not always". – Glen_b Jul 18 '22 at 10:24
  • Perhaps you mean to ask something else, like "should it always be encouraged" but then I believe you'd be asking a duplicate; at least some of the answers at some of those linked questions could be given in response to that (variation in answers notwithstanding, I believe the place to give arguments for or against such a question would be in already existing threads). I encourage you to alter your title to ask a different question. Given your body text is at odds with your title, indicating that at least one of the two should change, this might help with the choice. – Glen_b Jul 18 '22 at 10:28
  • The article linked in one of the questions you mention shows quite a bit of research on this subject (at the very least, in an independent t-test context). While preliminary normality tests are widely practiced, the article provides evidence that normality testing is pretty much irrelevant to the overall result. However, it does not qualify as harmful either: using the same procedure as the existing research did might be reasonable for consistency's sake. – dx2-66 Jul 18 '22 at 11:07

0 Answers