
I am using metabolite data generated by LC-MS. I am making comparisons between two groups at a time and need to account for Type I error from multiple comparisons, so I have used the Bonferroni correction. I am working in R; the code I used is below (where pvaluesvector was a vector of p-values from the Mann-Whitney tests for the different group comparisons).

    # Bonferroni-adjust the raw Mann-Whitney p-values
    pvaluesadjust <- p.adjust(pvaluesvector, method = "bonferroni")

Do my original p-values have to be less than the adjusted p-values calculated above in order for a comparison to be deemed statistically significant?

From what I have read, this is how I have understood it. A definition I found was: the adjusted p-value is the smallest familywise significance level at which a particular comparison would be declared statistically significant as part of the multiple comparison testing.

For example, comparing disease vs control, the p-value was 1.78e-105 and the adjusted p-value was 1.07e-104. As my p-value is less than the adjusted p-value, is this statistically significant?

With another comparison, disease 2 vs control, the p-value was 0.106807 and the adjusted p-value was 0.6408. However, assuming alpha was initially set to 0.05, this comparison would not be statistically significant.

How can I use the adjusted p-values to determine which comparisons are significant?

Edit: As I am using metabolite data, an alpha of 0.05 seems too large (my p-values come out quite small). I understand now that p.adjust (Bonferroni) in R multiplies each p-value by the number of comparisons.
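Checking this against the two comparisons above (assuming six comparisons in total, which is not stated explicitly but matches 0.6408 / 0.106807 ≈ 6):

    m <- 6                        # assumed number of comparisons
    p <- c(1.78e-105, 0.106807)
    p * m                         # 1.068e-104 and 0.640842, as quoted above
    pmin(p * m, 1)                # p.adjust also caps adjusted p-values at 1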

Are there other ways in R to carry out Bonferroni by dividing the alpha value by the number of comparisons instead? I would prefer this, as given my small p-values I assume I will otherwise have to set alpha to some arbitrary value smaller than 0.05.

Thank you

  • You compare the adjusted p-values to your $\alpha$ level; $\alpha$ is the number that is conventionally $0.05$. You don’t even need to look at the original p-values. In fact, you can set up your R code to feed the p-values straight into p.adjust without ever having to print them (see the sketch after these comments). – Dave Mar 22 '20 at 19:03
  • @Dave Thank you for your reply. I seem to be getting confused by what I have read, where the terms alpha and adjusted p-value are used interchangeably. I thought the Bonferroni correction adjusted the statistical threshold (alpha divided by the number of analyses performed on the dependent variable)? https://www.statisticssolutions.com/bonferroni-correction/ I have read elsewhere that the adjusted p-value is the smallest familywise significance level at which a particular comparison will be declared statistically significant. – user2259 Mar 22 '20 at 19:36
  • @Dave In addition, as there is a large number of comparisons involved, an alpha level of 0.05 is too large when my p-values are very small (meaning most results are deemed significant). I understand I can change the significance threshold to, say, 0.0001. However, could you please explain how the new adjusted p-value is calculated using the Bonferroni method here? – user2259 Mar 22 '20 at 19:41
  • Why are you changing from $\alpha=0.05$? Also, $\alpha$ and (adjusted) p-value are most certainly not interchangeable. Where did you see them used interchangeably? – Dave Mar 22 '20 at 21:15
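
A minimal sketch of the first comment's suggestion, assuming hypothetical lists group1_list and group2_list holding the measurements for each comparison (wilcox.test is R's Mann-Whitney test):

    # one Mann-Whitney p-value per comparison, never printing the raw values
    raw_p <- mapply(function(x, y) wilcox.test(x, y)$p.value,
                    group1_list, group2_list)
    adjusted <- p.adjust(raw_p, method = "bonferroni")
    adjusted <= 0.05   # TRUE marks the comparisons significant at alpha = 0.05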

1 Answer


The simple answer to your question is that you pick an $\alpha$, pick a method to correct for multiple testing (Bonferroni in your case), and then report the tests with adjusted p-values at or below $\alpha$ as giving significant results.
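
In code, reusing the pvaluesvector from the question, that procedure is simply:

    alpha <- 0.05
    adjusted <- p.adjust(pvaluesvector, method = "bonferroni")
    which(adjusted <= alpha)   # indices of the significant comparisons

    # For Bonferroni specifically, this makes the same decisions as dividing
    # alpha by the number of tests instead of multiplying the p-values:
    all((adjusted <= alpha) ==
        (pvaluesvector <= alpha / length(pvaluesvector)))   # TRUE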

In more complicated forms of adjustment, you will not be in a situation where you can just divide $\alpha$ by the number of tests and compare that value to the test p-values, so the p.adjust function adjusts the p-values for you to compare to an unadjusted $\alpha$.
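
For example, Holm's step-down method (method = "holm" in p.adjust) uses a different multiplier for each ordered p-value, so no single divided $\alpha$ reproduces its decisions; the adjusted p-values do. With some made-up p-values:

    p <- c(0.001, 0.01, 0.03, 0.04)       # illustrative values only
    p.adjust(p, method = "holm")          # 0.004 0.030 0.060 0.060
    p.adjust(p, method = "bonferroni")    # 0.004 0.040 0.120 0.160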

I’ll add to this as we discuss in the comments to your question, but that first paragraph (sentence) tells you how to perform your tests.

– Dave