
Can somebody please explain the major differences between covariance pattern models (Hedeker and Gibbons, 2006, Chapter 6; Jennrich and Schluchter, 1986) and generalized estimating equation models (Hardin and Hilbe, 2012; Liang and Zeger, 1986)?

My presumption is that the main difference is ML/REML estimation in the covariance pattern model case versus quasi-likelihood estimation in the GEE case. Is this correct?

There is also the question of applicability: covariance pattern models apply to Gaussian response data, whereas GEE accommodates responses from many other distributions (Gaussian, binary, binomial, Poisson, etc.).

Are a covariance pattern model with a Gaussian response and identity link and a GEE model with an identity link and Gaussian response similar or identical?

Chris

1 Answer


Actually you have correctly listed the major differences between covariance pattern models and GEE models. One thing I would like to add is that, in the sense of Section 6.2.5 "Random Effects Structure" of Hedeker and Gibbons (2006), the two models would be characterized as subject-specific (conditional) and population-averaged (marginal) models, respectively, though the two coincide in the linear case. See my answer here: What is the difference between random effects, fixed effects, and marginal models?
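To make the conditional/marginal distinction concrete, here is a minimal sketch in generic mixed-model notation (my notation, not the book's). With an identity link the random effects average out of the mean, which is why the two formulations coincide in the linear case:

$$E(y_{ij}\mid b_i)=x_{ij}'\beta+z_{ij}'b_i \quad\Rightarrow\quad E(y_{ij})=x_{ij}'\beta \quad\text{(since } E(b_i)=0\text{)},$$

whereas with a nonlinear link $g$,

$$E(y_{ij}\mid b_i)=g^{-1}(x_{ij}'\beta+z_{ij}'b_i) \quad\Rightarrow\quad E(y_{ij})=\int g^{-1}(x_{ij}'\beta+z_{ij}'b)\,f(b)\,db \neq g^{-1}(x_{ij}'\beta)$$

in general, so the subject-specific and population-averaged coefficients no longer have the same interpretation (e.g., with a logit link).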

I would say the two are numerically equivalent, though they use different estimation methods. See the Stata example below. Note that the covariance pattern model can be fitted with the mixed command, with the random effects suppressed by the noconstant option. Of course, we can turn to REML instead of ML to obtain unbiased variance estimates.
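As a sketch of that last point (same model and data as the example below, only the estimation method switches), the REML fit would simply append the reml option:

. mixed weight week || id:, noconstant residuals(exchangeable) reml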

. webuse pig

. mixed weight week || id:, noconstant residuals(exchangeable)

Mixed-effects ML regression                     Number of obs      =       432
Group variable: id                              Number of groups   =        48

                                                Obs per group: min =         9
                                                               avg =       9.0
                                                               max =         9

                                                Wald chi2(1)       =  25337.48
Log likelihood = -1014.9268                     Prob > chi2        =    0.0000

------------------------------------------------------------------------------
      weight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        week |   6.209896   .0390124   159.18   0.000     6.133433    6.286359
       _cons |   19.35561   .5974056    32.40   0.000     18.18472    20.52651
------------------------------------------------------------------------------
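As a side note, after this mixed fit you should be able to display the fitted within-pig correlation matrix implied by residuals(exchangeable), which is the analogue of the GEE working correlation shown below (a sketch using the standard postestimation command):

. estat wcorrelation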


. xtset id week

. xtgee weight week, corr(exchangeable)

GEE population-averaged model                   Number of obs      =       432
Group variable:                         id      Number of groups   =        48
Link:                             identity      Obs per group: min =         9
Family:                           Gaussian                     avg =       9.0
Correlation:                  exchangeable                     max =         9
                                                Wald chi2(1)       =  25337.48
Scale parameter:                  19.20076      Prob > chi2        =    0.0000

------------------------------------------------------------------------------
      weight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        week |   6.209896   .0390124   159.18   0.000     6.133433    6.286359
       _cons |   19.35561   .5974055    32.40   0.000     18.18472    20.52651
------------------------------------------------------------------------------


But in GEE, we often use robust (empirical) standard errors instead of model-based standard errors. When we add the robust option, only the standard errors change; the coefficient estimates stay the same.
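For reference, a rough sketch of the model-based versus robust (sandwich) variance estimators in the GEE setting, in my own notation rather than Stata's: with $D_i=\partial\mu_i/\partial\beta$ and working covariance $V_i$ for cluster $i$,

$$\widehat{\operatorname{Var}}_{\text{model}}(\hat\beta)=B^{-1},\qquad \widehat{\operatorname{Var}}_{\text{robust}}(\hat\beta)=B^{-1}\Big(\sum_i D_i'V_i^{-1}(y_i-\hat\mu_i)(y_i-\hat\mu_i)'V_i^{-1}D_i\Big)B^{-1},\qquad B=\sum_i D_i'V_i^{-1}D_i.$$

Since $\hat\beta$ solves the same estimating equations either way, only the standard errors differ, which matches the output below.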

. xtgee weight week, corr(exchangeable) robust

GEE population-averaged model                   Number of obs      =       432
Group variable:                         id      Number of groups   =        48
Link:                             identity      Obs per group: min =         9
Family:                           Gaussian                     avg =       9.0
Correlation:                  exchangeable                     max =         9
                                                Wald chi2(1)       =   4552.32
Scale parameter:                  19.20076      Prob > chi2        =    0.0000

                                  (Std. Err. adjusted for clustering on id)
------------------------------------------------------------------------------
             |               Robust
      weight |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        week |   6.209896   .0920382    67.47   0.000     6.029504    6.390287
       _cons |   19.35561   .4038676    47.93   0.000     18.56405    20.14718
------------------------------------------------------------------------------


Randel