
I'm trying to figure out where the false discovery rate correction in a genetics paper comes from, and failing. The paper looks at a sequencing method where every sample is read multiple times, each read giving either a positive or a negative result. The authors want to threshold the number of positive reads required before a sample is accepted, so that the overall false positive rate is 1%. They use something very close to the Benjamini-Hochberg method, but for a reason I'm struggling to figure out they've used the total number of negatives rather than the total number of trials in the denominator.

Benjamini-Hochberg would be $p(k) \le \frac{k}{m}\alpha$, where $p(k)$ is the $p$-value associated with getting $k$ positives in $m$ trials and $\alpha$ is the desired $q$-value.

This paper is using $p(k) \le \frac{k}{u}\alpha$ where $u$ is the number of negative trials.
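To make the comparison concrete, here's a quick sketch of how I read the two rules, treating $p(k)$ as the binomial tail probability of seeing at least $k$ positive reads under some background error rate; the helper `min_positives` and all of the numbers below are made up for illustration and don't come from either paper:

```python
from scipy.stats import binom

# Illustrative numbers only; none of these come from the papers.
m = 30        # reads per sample (total trials)
u = 25        # reads that came back negative (the paper's denominator)
p0 = 0.01     # assumed per-read background (error) rate
alpha = 0.01  # target overall false positive rate of 1%

def min_positives(denom):
    """Smallest k such that p(k) <= (k / denom) * alpha,
    where p(k) = P(X >= k) for X ~ Binomial(m, p0)."""
    for k in range(1, m + 1):
        p_k = binom.sf(k - 1, m, p0)  # upper-tail probability P(X >= k)
        if p_k <= (k / denom) * alpha:
            return k
    return None

print("positives required with m in the denominator:", min_positives(m))
print("positives required with u in the denominator:", min_positives(u))
```

The only difference between the two rules is which count goes in the denominator of the right-hand side; everything else above is my own (possibly wrong) reading of the procedure.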

Why would they be doing this? I can see that it is a more conservative approach, but I can't find any theoretical justification for using this particular formulation.

The papers in question are these: http://www.ncbi.nlm.nih.gov/pubmed/20436463 and http://www.nature.com/nature/journal/v462/n7271/full/nature08514.html
