I am performing $N$ independent statistical tests with the same null hypothesis, and would like to combine the results into one $p$-value. It seems that there are two "accepted" methods: Fisher's method and Stouffer's method.
My question is about Stouffer's method. For each separate test I obtain a z-score $z_i$. Under the null hypothesis, each of them follows a standard normal distribution, so the sum $\sum_i z_i$ follows a normal distribution with mean zero and variance $N$. Stouffer's method therefore suggests computing $\sum_i z_i / \sqrt{N}$, which is standard normal under the null, and using it as a joint z-score.
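For concreteness, here is a minimal Python sketch of that computation (the z-scores are made-up illustrative values):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical z-scores from N independent tests (illustrative values only).
z = np.array([1.2, -0.4, 2.1, 0.7])
N = len(z)

# Stouffer's combined z-score: the sum of z-scores, rescaled to unit variance.
z_combined = z.sum() / np.sqrt(N)

# One-sided combined p-value from the standard normal survival function.
p_combined = norm.sf(z_combined)
print(z_combined, p_combined)
```

(For what it's worth, `scipy.stats.combine_pvalues` implements both Fisher's and Stouffer's methods, operating on the individual p-values rather than on z-scores.)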
This is reasonable, but here is another approach that I came up with and that also sounds reasonable to me. As each $z_i$ comes from a standard normal distribution, the sum of squares $S = \sum_i z_i^2$ should follow a chi-squared distribution with $N$ degrees of freedom. So one can compute $S$ and convert it to a $p$-value using the chi-squared CDF with $N$ degrees of freedom: $p = 1 - X_N(S)$, where $X_N$ is the CDF.
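A sketch of this alternative in the same style (again with made-up z-scores; the survival function `sf` is used instead of `1 - cdf` for numerical stability):

```python
import numpy as np
from scipy.stats import chi2

# Same hypothetical z-scores as above.
z = np.array([1.2, -0.4, 2.1, 0.7])
N = len(z)

# Under H0, the sum of squared standard-normal z-scores is chi-squared with N df.
S = np.sum(z**2)

# p = 1 - X_N(S), computed via the survival function.
p = chi2.sf(S, df=N)
print(S, p)
```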
However, I cannot find this approach mentioned anywhere. Is it ever used? Does it have a name? What would be its advantages and disadvantages compared to Stouffer's method? Or is there a flaw in my reasoning?

I used the term "expected variance" to mean just what it sounds like: the variance you expect for sums of $Z$. From a frequentist standpoint, if you took sums of $Z$ (of the same size) repeatedly, you would build a sampling distribution of those sums, and that sampling distribution would have a variance equal to the "expected variance", i.e. $N$.
– russellpierce Aug 02 '13 at 13:04
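If it helps, this is easy to check by simulation; a small sketch (the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5            # z-scores per sum (arbitrary choice)
reps = 100_000   # number of repeated sums

# Repeatedly sum N standard-normal z-scores and look at the sampling variance.
sums = rng.standard_normal((reps, N)).sum(axis=1)
print(sums.var())  # close to N, the "expected variance" of the sum
```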