
Can anyone explain how the sample-size rule of thumb $20\times (p+q)$ is derived or justified? Here $p$ is the number of parameters in the final model and $q$ is the number of parameters that may have been examined but discarded along the way.

Any credible reference is appreciated.
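For concreteness, here is how the rule works out numerically. The values below ($p = 5$ parameters in the final model, $q = 3$ candidates examined but discarded) are hypothetical, chosen only to illustrate the arithmetic:

```python
def rule_of_thumb_n(p: int, q: int, multiplier: int = 20) -> int:
    """Suggested minimum sample size under the 20 * (p + q) heuristic.

    p: number of parameters in the final model
    q: number of parameters examined but discarded along the way
    (the example values used below are hypothetical)
    """
    return multiplier * (p + q)

print(rule_of_thumb_n(5, 3))  # 20 * (5 + 3) = 160
```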

  • Welcome to CV. Check if the edit I did is okay or not. – User1865345 Apr 06 '23 at 15:44
  • Could you provide some background or a source for this formula? It's an interesting but unfamiliar one, so having some context would help. – whuber Apr 06 '23 at 15:44
  • @User1865345 Thanks a lot – Ishanee Mahapatra Apr 06 '23 at 15:50
  • Let me be clear. You got the formula apparently from somewhere and yet you are seeking a reference. Perhaps a clarification is needed. – User1865345 Apr 06 '23 at 15:52
  • @User1865345 Yes, I am seeking a reference for this formula. Please help me out if you can. – Ishanee Mahapatra Apr 06 '23 at 15:55
  • @whuber I need a reference. A journal paper or book reference would be most appreciated. – Ishanee Mahapatra Apr 06 '23 at 15:57
  • It seems to be a quote of a comment by Frank Harrell on https://stats.stackexchange.com/questions/78289/neural-network-modeling-sample-size – Henry Apr 06 '23 at 16:38
  • @Henry Yes, I have seen this, but I need a reference. Any help is really appreciated. – Ishanee Mahapatra Apr 06 '23 at 16:43
  • This comment was not intended to be authoritative, I am sure: it was offered as general guidance for assessing whether your thinking about sample sizes might be hugely wrong. Please don't take it as a general rule of thumb or universally applicable. IMHO, its principal merit lies in directing your attention to the number of parameters "that may have been examined but [were] discarded along the way." – whuber Apr 06 '23 at 19:33

0 Answers