
Suppose your significance level is 0.05, and some coefficients are significant while others are not (i.e. some of the p-values are less than 0.05 and others are greater than 0.05). Does this say anything about their importance in relation to the outcome variable? In other words, does a non-significant coefficient indicate that the corresponding variable is not important?

Could I just divide the absolute value of each z-value by the sum of the absolute z-values to get the relative importance of each variable?

2 Answers


Just as with ordinary regression, it's possible for significant relationships to be unimportant (indeed this is a regular occurrence when the sample size is large, since raw effects may be so small as to be almost meaningless but still be detectably different from 0 quite often).

Similarly it's possible for relationships that don't attain significance to be potentially important. The problem is that the uncertainty is large enough that it may just be noise. This is a common occurrence when the sample size is small, since substantial raw effects -- ones large enough to be of practical interest -- may not be large compared to their standard error. You might have a big effect or you might have almost no effect and a lot of noise.
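To make this concrete, here is a small, hypothetical simulation sketch in R (the variable names, coefficient of 0.03, and sample size are invented for illustration) showing how a practically negligible effect can still come out significant once the sample is large:

    # Hypothetical illustration: a tiny effect becomes "significant" at large n.
    set.seed(2)
    n <- 1e5
    x <- rnorm(n)
    y <- rbinom(n, 1, plogis(0.03 * x))   # true effect of only 0.03 on the log-odds scale
    summary(glm(y ~ x, family = binomial))$coefficients["x", ]
    # The p-value is usually far below 0.05 even though the effect is negligible in
    # practical terms; with a small n the same effect would typically be "non-significant".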

Glen_b

Your strategy is a possibility. To see why, take a look at this answer:

Calculating relative importance of predictors in a poisson glm model

I quote:

Yet another approach would be to take the absolute values of, in your model, the Z-statistics, sum them up and then repercentage each abs parameter with that total. By ranking those relativized percentages, a viable heuristic for relative importance can be easily obtained.

I think that's exactly what you're asking. The popular caret package implements a similar approach, but for linear models.

Before you do that, remember to standardize your variables to the same scale.
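Here is a rough sketch of that heuristic in R (this is not the caret implementation; the simulated data, the variable names x1-x3, and the true coefficients below are made up for illustration, and I assume a binomial GLM):

    # Sketch of the |z|-repercentaging heuristic on hypothetical simulated data.
    set.seed(1)
    n  <- 500
    x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
    y  <- rbinom(n, 1, plogis(0.8 * x1 + 0.2 * x2))      # x3 has no true effect

    # Standardize the predictors first so the z-values are on a comparable footing
    d   <- data.frame(y = y,
                      x1 = scale(x1)[, 1],
                      x2 = scale(x2)[, 1],
                      x3 = scale(x3)[, 1])
    fit <- glm(y ~ x1 + x2 + x3, family = binomial, data = d)

    z       <- summary(fit)$coefficients[-1, "z value"]  # drop the intercept
    rel_imp <- abs(z) / sum(abs(z))                      # repercentage the absolute z's
    round(sort(rel_imp, decreasing = TRUE), 3)

Ranking the resulting percentages gives the heuristic ordering described in the quote; it says nothing about statistical significance on its own, so read it alongside the caveats in the other answer.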

SmallChess