
I have the following problem: I want to estimate the following regression models: $$ Y_1 = \beta_1 + \beta_2X $$ $$ Y_2 = \beta_3 + \beta_4X $$

That is, I have two different dependent variables and the same independent variable. I want to test the following hypotheses: $$H1: \beta_2 = 0$$ $$H2: \beta_2 = \beta_4$$

I have found several related questions on Cross Validated, but after reading all the answers I am still unsure what to do.

A common answer to this kind of question (e.g., here) is to estimate the models separately and then perform the following test: $$Z=\frac{\beta_2-\beta_4}{\sqrt{SE(\beta_2)^2+SE(\beta_4)^2}}$$

This goes back to:

Clogg, C. C., Petkova, E., & Haritou, A. (1995). Statistical Methods for Comparing Regression Coefficients Between Models. American Journal of Sociology, 100(5), 1261–1293. http://www.jstor.org/stable/2782277

However the authors write in the paper that:

"It is very important to note that the logic involved here posits one outcome variable (Y); there are two competing models for this single outcome: the reduced model and the full model"

Hence, the authors do not seem to believe that this works for different dependent variables, even though I found several instances on Cross Validated where this formula was given as an answer to questions about different dependent variables.

Another solution that was mentioned (e.g., here) was to pool the dependent variables and use a dummy that is one if a data point belongs to $Y_1$ and zero if it belongs to $Y_2$, together with an interaction term. However, it was pointed out that this assumes the error variance is the same for both $Y_1$ and $Y_2$, and I do not think I can assume this in my setup.
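A sketch of that pooling approach, again on simulated data: the interaction coefficient `x:g` is exactly the difference $\beta_2-\beta_4$. One common way to relax the equal-error-variance assumption (an assumption here, not something the linked answer necessarily proposed) is to use heteroskedasticity-robust standard errors:

```python
# Pooled regression with a group dummy and interaction term.
# HC3 robust SEs are used so the test does not rely on equal error
# variances in the two groups. Simulated data for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n1, n2 = 80, 120
x1, x2 = rng.normal(size=n1), rng.normal(size=n2)
y1 = 1.0 + 0.5 * x1 + rng.normal(scale=1.0, size=n1)
y2 = 2.0 + 0.9 * x2 + rng.normal(scale=2.0, size=n2)

df = pd.DataFrame({
    "y": np.concatenate([y1, y2]),
    "x": np.concatenate([x1, x2]),
    "g": np.concatenate([np.ones(n1), np.zeros(n2)]),  # 1 if Y1, 0 if Y2
})

# y ~ x * g expands to x + g + x:g; the x:g coefficient is beta_2 - beta_4
fit = smf.ols("y ~ x * g", data=df).fit(cov_type="HC3")
print(fit.params["x:g"], fit.pvalues["x:g"])
```

If the same subjects contribute both a $Y_1$ and a $Y_2$ value, the pooled errors are also correlated within subject, so clustered standard errors would be the more defensible choice.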

A final approach I found was to run a multivariate regression. However, I have fewer data points for $Y_1$ than for $Y_2$, and if I understand it correctly, I could only include data points where both $Y_1$ and $Y_2$ are available in my multivariate regression, so I would lose statistical power for the test of H1.

Is there any way that allows me to use all my data points to test H1 and then test H2 separately?

umbal
  • You could fit the MV model using maximum likelihood. This would use all available data. Then you could do the same fit, but with each of the restricted models. Then perform likelihood ratio chi square tests. – BigBendRegion Oct 21 '22 at 20:26
  • Would a likelihood ratio test not tell me which model is a better fit, rather than whether a coefficient changed significantly? – umbal Oct 24 '22 at 15:14
  • They are indeed significance tests, each asymptotically chi square with one df. Of course, you can also use the (penalized) maximized likelihoods as measures of model fit. – BigBendRegion Oct 24 '22 at 19:34
  • Maybe I misunderstand you, but I don’t see how this would work. How can I learn anything about whether an independent variable has a larger effect on one dependent variable than on another by doing likelihood ratio tests between the multivariate model with both dependent variables and models with just one dependent variable? I see how your approach would work if I wanted to run regressions with a single dependent variable and multiple independent variables, but that is not what I want to do. – umbal Oct 25 '22 at 13:31
  • Use the bivariate normal likelihood for the complete pairs, and use the univariate likelihood for the cases where one or the other dv is missing. Of course, this method assumes that the data are missing at random. This is a commonly used method for handling missing values. – BigBendRegion Oct 26 '22 at 00:10
