It looks like R's cor.test returns a p-value of exactly zero when the true p-value is very small. For example:
> sprintf('%e', cor.test(1:100, c(1.2, 1:98, 1.1), method='spearman')$p.value)
[1] "0.000000e+00"
In SciPy the same test gives a very small but nonzero p-value:
> print scipy.stats.spearmanr(range(100), [1.2]+range(98)+[1.1])
(0.94289828982898294, 1.3806191275561446e-48)
Presumably the p-value gets rounded down to 0 once it becomes too small for R to represent with its usual double-precision floating-point type? Is there a simple way to obtain the actual number, or is reporting p < 2.2e-16 the best I can do?
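(For reference: the 2.2e-16 in the familiar "p-value < 2.2e-16" message is R's machine epsilon, which format.pval uses as a display cutoff; it is not the smallest positive double R can store. A quick look at the built-in constants, shown only for context:)

cor.test(1:100, c(1.2, 1:98, 1.1), method = "spearman")$p.value == 0   # TRUE: the stored value really is 0
.Machine$double.eps    # 2.220446e-16: the display cutoff behind "p-value < 2.2e-16"
.Machine$double.xmin   # 2.225074e-308: smallest positive normalised double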
Comments:

See the help page for cor.test, where it states "pKendall and pSpearman in package SuppDists, spearman.test in package pspearman, which supply different (and often more accurate) approximations." Loading SuppDists, extracting $estimate from your cor.test result, and passing it to pSpearman gives an allegedly exact value. – whuber Jun 29 '15 at 22:20

That 2.2e-16 figure is .Machine$double.eps. See that link for an explanation of why it makes little sense to discuss any notion of "exact" p values anywhere near that small anyway. It's a bit like arguing about how many angels can dance on the head of a pin, when you can only look at a different type of pin to the kind you want to discuss. The extreme tails depend heavily on assumptions like between-point independence. – Glen_b Jun 30 '15 at 01:09

The results returned by SciPy and R differ, even though they agree to many d.p. Although these differences may be inconsequential, upon observing them we must immediately mistrust the output of both programs for all inputs until we understand the reason for the discrepancy. – whuber Jun 30 '15 at 13:23
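Following whuber's pointer, here is a rough sketch of what that could look like in R. It assumes the SuppDists package is installed, that pSpearman takes the observed rho followed by the number of observations, and that it accepts the usual lower.tail argument; none of that is verified here, so check ?pSpearman before relying on it.

library(SuppDists)   # assumed installed; provides pSpearman (per the cor.test help page)

ct  <- cor.test(1:100, c(1.2, 1:98, 1.1), method = "spearman")
rho <- unname(ct$estimate)                    # sample Spearman rho, about 0.943

# Upper-tail probability of a rho at least this large under the null hypothesis;
# doubling it gives a (conservative) two-sided p-value, capped at 1.
p_one_sided <- pSpearman(rho, 100, lower.tail = FALSE)
p_two_sided <- min(1, 2 * p_one_sided)
sprintf("%e", p_two_sided)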