
Below is the output from my regression model; the omnibus test of moderators is highly significant.

Multivariate Meta-Analysis Model (k = 158; method: REML)

Variance Components:

            estim    sqrt  nlvls  fixed            factor 
sigma^2.1  0.1465  0.3828     53     no           studyID 
sigma^2.2  0.1187  0.3446    153     no  studyID/substudy

Test for Residual Heterogeneity: QE(df = 154) = 32228.8456, p-val < .0001

Test of Moderators (coefficients 1:4): QM(df = 4) = 27.1413, p-val < .0001

Model Results:

                  estimate      se     zval    pval    ci.lb    ci.ub
vegTypeforest      -0.3656  0.1616  -2.2625  0.0237  -0.6824  -0.0489    *
vegTypeshrubland   -0.1211  0.1506  -0.8039  0.4215  -0.4164   0.1742
vegTypegrassland   -0.4076  0.0913  -4.4638  <.0001  -0.5866  -0.2286  ***
vegTypesavannah    -0.5128  0.4257  -1.2046  0.2283  -1.3472   0.3215


Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

These are the pairwise comparisons.

Hypotheses:

1: vegTypeshrubland - vegTypegrassland = 0 
2:  vegTypeshrubland - vegTypesavannah = 0 
3:  vegTypegrassland - vegTypesavannah = 0 
4:    vegTypeforest - vegTypeshrubland = 0 
5:    vegTypeforest - vegTypegrassland = 0 
6:     vegTypeforest - vegTypesavannah = 0 

Results:

   estimate     se    zval   pval 
1:   0.2865 0.1762  1.6265 0.1039 
2:   0.3917 0.4516  0.8675 0.3857 
3:   0.1052 0.4354  0.2416 0.8091 
4:  -0.2445 0.2209 -1.1068 0.2684 
5:   0.0420 0.1856  0.2262 0.8211 
6:   0.1472 0.4553  0.3232 0.7465 
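As a sanity check, each pairwise estimate above is just the difference of two coefficients from the Model Results table (a minimal Python sketch, not the original R/metafor code; the pairwise standard errors additionally involve the coefficient covariances, which the printed output does not show):

```python
# Coefficients from the Model Results table above
coef = {"forest": -0.3656, "shrubland": -0.1211,
        "grassland": -0.4076, "savannah": -0.5128}

# The six hypotheses, in the order listed above
pairs = [("shrubland", "grassland"), ("shrubland", "savannah"),
         ("grassland", "savannah"), ("forest", "shrubland"),
         ("forest", "grassland"), ("forest", "savannah")]

for k, (a, b) in enumerate(pairs, start=1):
    print(f"{k}: {a} - {b} = {coef[a] - coef[b]:.4f}")
```

The printed differences reproduce the `estimate` column of the Results table exactly.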

I understand, based on other threads, that in some cases the overall ANOVA result can be significant while the pairwise comparisons are not. Is this result valid, or is something wrong?

Dips
  • There is nothing wrong, these are different hypotheses being tested. See for example also here. – PBulls Oct 18 '23 at 08:38
  • Welcome to SE! Could you introduce the problem a bit and explain the different outcomes you are getting? – Knarpie Oct 18 '23 at 08:42
  • I think these are acceptable results. However, this is the comment I received from my advisor: "You can, for example, get an ANOVA with a p-value of p = 0.02 and then none of the post-hoc tests are significant, although some might be close. But what I have difficulty getting my mind around is the main test having a p of 0.0001 (so not exactly borderline) while none of the post-hoc tests are even close (the closest is a p of 0.2397)." I am at a loss as to how to explain that the above results are fine and valid. I am looking for a more detailed answer as to why this could be happening. – Dips Oct 18 '23 at 08:50
  • There are many similar questions here (with answers), but it is difficult to find one answering your specific question. An idea (I will see if I can write an answer ...): with one-way ANOVA (the simplest example) and very many groups, the overall F-test gains power from the many groups, but many groups also mean very many pairwise comparisons, which makes life difficult for the post-hoc tests. – kjetil b halvorsen Oct 29 '23 at 14:38

1 Answer


You have removed the intercept (a fact which is not obvious, since you did not show us the command you used). This means that the overall test has a different null hypothesis than when you include the intercept: the omnibus test is testing the hypothesis that $\beta_1 = \dots = \beta_4 = 0$, i.e. that all four vegetation-type means are zero, and they clearly are not. Your pairwise tests, by contrast, are testing $\beta_1 = \beta_2$ and so on, i.e. whether the means differ from each other, and they do not differ much.
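To see this numerically, here is a minimal sketch (Python, not the original R/metafor code) that rebuilds both tests from the estimates and standard errors in the output above, treating the coefficients as independent for simplicity (the fitted model also has coefficient covariances, so this is only an approximation):

```python
import numpy as np
from scipy import stats

# Coefficients and SEs from the Model Results table above
# (forest, shrubland, grassland, savannah)
est = np.array([-0.3656, -0.1211, -0.4076, -0.5128])
se = np.array([0.1616, 0.1506, 0.0913, 0.4257])

# Omnibus Wald test of H0: beta1 = ... = beta4 = 0 (the no-intercept QM test).
# All four means are well below zero, so this statistic is large.
QM = float(np.sum((est / se) ** 2))
p_omnibus = stats.chi2.sf(QM, df=4)

# Pairwise z-tests of H0: beta_i = beta_j, assuming independent coefficients.
# The means are all similar to each other, so none of these is significant.
p_pair = []
for i in range(4):
    for j in range(i + 1, 4):
        z = (est[i] - est[j]) / np.hypot(se[i], se[j])
        p_pair.append(2 * stats.norm.sf(abs(z)))

print(f"QM = {QM:.2f}, omnibus p = {p_omnibus:.1e}")
print(f"smallest pairwise p = {min(p_pair):.3f}")
```

Even under this independence simplification, QM comes out close to the 27.14 reported above (with p < .0001), while the smallest pairwise p-value is about 0.10: the omnibus test rejects because the common mean is far from zero, not because the vegetation types differ from one another.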

mdewey
    And just to add to this: This kind of question comes up so often that I have written up an entire tutorial on this on the metafor package website (which was used for the analysis above): https://www.metafor-project.org/doku.php/tips:models_with_or_without_intercept – Wolfgang Oct 30 '23 at 18:09