
I am trying to fit a propensity-score-weighted Cox regression model in R. However, one of the treatment groups has zero events, so I also need an adjustment method (Firth's penalized likelihood).

I have been using the coxphf package just fine to run unweighted models, but it has no argument for including weights in the way that the regular coxph() function does. Any guidance would be much appreciated.

Alexis
  • Have you tried coxph() with a ridge() term for penalizing the troublesome coefficient? – EdM Aug 18 '22 at 15:31
  • I haven't! Would this still work if my only coefficient is the treatment variable? i.e. Surv(time, event) ~ ridge(treatment) – Jordan Hackett Aug 18 '22 at 15:38
  • Yes. A ridge() term in coxph() gives a result on the first example used in the coxphf help page. Pay attention to how much penalization is invoked, as that also sets the bias in the coefficient estimate. If treatment has >2 levels you probably should set scale=FALSE in the ridge() term. I haven't thought through how to get profile-likelihood confidence intervals with a ridge() term. I'd be reluctant to base the entire analysis on the propensity-score weighting. Combining that with direct covariate adjustment can give a doubly robust estimate. – EdM Aug 18 '22 at 16:56
  • On further thought, there probably is no need to do a penalized regression in this situation. Instead of using the troublesome Wald confidence intervals, calculate profile-likelihood intervals as illustrated on this page (a sketch of this approach appears after these comments). One limit will be infinite, but that's OK if there are no events in one group. That will give an estimate for the other limit, without any penalization and associated bias. – EdM Aug 18 '22 at 17:27
  • Oh great, thank you so much for your assistance! @EdM – Jordan Hackett Aug 18 '22 at 18:18
  • I would bootstrap the whole thing and not worry about the profile likelihood confidence intervals, which will be incorrect with propensity score weights. The problem with covariate adjustment is that the estimate isn't doubly robust in this case because the estimand changes to a conditional rather than marginal one. If you can get the point estimate you need, bootstrap the whole process (including estimating the weights). – Noah Aug 18 '22 at 18:27
  • @Noah in this case there are no events in one treatment group, so there won't be reliable unpenalized point estimates of a Cox regression coefficient to bootstrap. If one is willing to settle for the finite end of a CI, would it be valid to bootstrap the entire process and work (somehow) with the resulting set of profile-likelihood CIs? Or is there some way to incorporate the uncertainty about the weights into the profile likelihood? – EdM Aug 19 '22 at 14:52
  • @EdM Ah, I see, so you're saying the best OP can do with no events in one group is a lower bound for a confidence interval. In that case I have no additional advice. There isn't a way to incorporate the uncertainty of the weights into a profile likelihood without manual coding (and I don't even know what that would look like), but even if the weights were fixed, one would need a robust-type covariance matrix for valid inference. That is, even in the simple case of weighted logistic regression, likelihood-based inference is invalid. – Noah Aug 19 '22 at 15:42
  • @Noah thanks, that clarifies a few things that had me confused. – EdM Aug 19 '22 at 16:06
  • Crossposted here: https://stackoverflow.com/questions/73405057/is-there-a-way-in-r-to-include-weights-in-a-cox-regression-with-firths-penalize – David Thiessen Aug 24 '22 at 19:31
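
For reference, here is a minimal sketch of the profile-likelihood approach suggested in the comments, assuming a data frame `d` with columns `time`, `status`, and a two-level `treatment` (all placeholder names):

```r
library(survival)

## Evaluate the partial log-likelihood over a grid of coefficient
## values by fixing the coefficient via `init` and allowing no
## iterations; suppressWarnings() silences the non-convergence note.
betas <- seq(-5, 5, by = 0.01)
pll <- sapply(betas, function(b)
  suppressWarnings(
    coxph(Surv(time, status) ~ treatment, data = d,
          init = b, control = coxph.control(iter.max = 0))
  )$loglik[2])

## 95% profile-likelihood limits: coefficient values whose
## log-likelihood lies within qchisq(0.95, 1)/2 of the maximum.
range(betas[pll >= max(pll) - qchisq(0.95, 1) / 2])
```

With no events in one group, the log-likelihood plateaus in one direction, so one limit runs off the grid (effectively infinite) and only the other limit is informative.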

1 Answer


One advantage of the Firth penalization provided by coxphf() (a penalty based on the determinant of the Fisher information matrix) is its ability to provide profile-likelihood confidence intervals, but that advantage is lost with a weighted regression.

As Noah reminded us in comments, and as Therneau and Grambsch explain in Section 7.3 of their book, partial likelihood maximization with a weighted model treats each weight as the number of independent cases having the same covariate values and outcomes. That's not the situation with propensity weights, so profile-likelihood confidence intervals in your situation would be artificially narrow. Robust coefficient (co)variance estimates are required. You might find a way to incorporate a Firth penalty into the user-defined penalization allowed for in coxph() and handle the matter that way.
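
For instance, here is a minimal sketch of a weighted fit with a sandwich (robust) covariance, assuming the estimated propensity weights are in a column `ipw` and subjects are identified by `id` (both placeholder names):

```r
library(survival)

## Weighted Cox fit with a robust (sandwich) covariance estimate;
## cluster = id groups rows from the same subject when computing
## the robust standard errors.
fit_w <- coxph(Surv(time, status) ~ treatment, data = d,
               weights = ipw, robust = TRUE, cluster = id)
summary(fit_w)  # report the 'robust se' column, not the naive one
```

That addresses the variance issue but not the zero-event problem, which is where the penalization or the bootstrap still comes in.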

If you want to use Firth penalization and can't find a way to incorporate it into a coxph() penalty, a work-around for a point estimate via coxphf() would be to expand your data set so that you have integer numbers of cases in proportion to their propensity weights. Noah's idea to bootstrap the entire process, including estimation of the propensity weights, could then provide a robust confidence interval (although you will need to explain how one limit can be infinite, given the lack of events under one treatment).
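
A sketch of that expansion idea, under the same placeholder names as above (the multiplier `mult` trades rounding error against data-set size):

```r
library(coxphf)

## Replicate each row in proportion to its propensity weight so
## that coxphf() sees integer "numbers of cases".
mult <- 10
counts <- round(mult * d$ipw)
d_expanded <- d[rep(seq_len(nrow(d)), counts), ]

fit_f <- coxphf(Surv(time, status) ~ treatment, data = d_expanded)
fit_f$coefficients  # use as a point estimate only
```

Only the point estimate is trustworthy here; intervals computed from the expanded data would be far too narrow, which is why the bootstrap over the entire process is needed for inference.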

You also could consider ridge regression as an alternative penalization, penalizing the sum of squares of the regression coefficients. A ridge() term in a coxph() model does that directly, although you should think about how much penalty to impose. There might be some advantage in using the tools in the glmnet package for cross-validating ridge models and selecting the penalty, although I'm not sure how well that would behave with no events under one treatment.
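
A sketch of the ridge() route, with `treatment` assumed coded as a 0/1 numeric indicator; `theta` sets the penalty strength, and choosing it is the judgment call noted above:

```r
library(survival)

## Ridge-penalized, weighted Cox fit. Larger theta means more
## shrinkage (and more bias toward zero), so check how sensitive
## the estimate is to this choice.
fit_r <- coxph(Surv(time, status) ~ ridge(treatment, theta = 1, scale = FALSE),
               data = d, weights = ipw)
summary(fit_r)
```

As above, don't trust the printed likelihood-based or Wald intervals under propensity weighting; bootstrap the whole process for interval estimates.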

EdM