I am running a large number (potentially hundreds of millions to billions) of null hypothesis significance tests (specifically a Poisson test, but the question is general). I would like to apply a multiple-testing correction to them (e.g. false discovery rate). Due to computational considerations, I am running the tests in batches of about 1 million tests each. If I know in advance the total number of tests that I will be running, is it possible to perform the correction on the p values of each batch separately while taking the total number of tests into account?
I noticed that the p.adjust function in R seems to offer such a possibility via its n argument (the number of comparisons), but the documentation for this function also warns to "only set this (to non-default) when you know what you are doing!"
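For concreteness, something like the following sketch is what I have in mind (batch_p and total_tests are placeholders, not my actual data):

```r
# Hypothetical sketch: adjust one batch's p values while telling
# p.adjust about the full number of tests across all batches.
batch_p     <- runif(1e6)   # placeholder: ~1 million p values from one batch
total_tests <- 5e8          # placeholder: known total number of tests overall

# BH adjustment of this batch alone, with n set to the total number of
# tests rather than the default length(batch_p) -- is this valid?
q_batch <- p.adjust(batch_p, method = "BH", n = total_tests)
```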
While I could of course calculate the FDR correction after combining all the batches, this may not be feasible due to the size of the data.
Any insights would be greatly appreciated.