If you have a coefficient covariance matrix, you can perform Wald tests on the set of coefficients associated with each original predictor. Tests on multiple coefficients at once are sometimes called "chunk tests." The test is whether any of a set of coefficients differs from zero, taking the covariances among their estimates into account.
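In matrix terms, if $\hat\beta_S$ is the vector of estimated coefficients for one predictor (the indices in the set $S$) and $\hat V_{SS}$ is the corresponding block of the covariance matrix, the chunk statistic is the quadratic form
$$W = \hat\beta_S^{\top}\, \hat V_{SS}^{-1}\, \hat\beta_S ,$$
referred to a $\chi^2$ distribution with $|S|$ degrees of freedom under the null hypothesis that all coefficients in $S$ are zero.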
That's how the anova() function in Frank Harrell's rms package works. I believe that's also how the Anova() function in the car package works if you specify type = "II" and test.statistic = "Wald" in the function call.
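For example, here is a sketch using the lung data that ships with the survival package (substitute your own data and predictors; the exact output layout may differ across package versions):

library(survival)
library(rms)
library(car)

# Wald "chunk" tests from anova() in rms, which pools the spline coefficients by predictor
fit_rms <- cph(Surv(time, status) ~ rcs(age, 4) + sex, data = lung,
               x = TRUE, y = TRUE)
anova(fit_rms)

# Type-II Wald tests from Anova() in car on an ordinary coxph fit
fit_cox <- coxph(Surv(time, status) ~ splines::ns(age, 3) + sex, data = lung)
Anova(fit_cox, type = "II", test.statistic = "Wald")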
If there's a glitch arising from the strata in the model or some problem with naming of variables, you could do this yourself by identifying the indices of each set of associated coefficients (e.g.: all coefficients for your spline; a main coefficient and all associated nonlinear and interaction terms) and doing the calculations with a simple program. The fundamental part of anova() in rms is in the first few lines of the code (type rms:::anova.rms at the R command prompt):
ava <- function(idx) {
    # Wald chi-square for the selected coefficients: b' V^{-1} b
    chisq <- coef[idx] %*% solvet(cov[idx, idx], coef[idx], tol = tol)
    # return the statistic along with its degrees of freedom
    c(chisq, length(idx))
}
The idx values are the indices of the coefficients in question, coef holds the corresponding point estimates (on the original log-hazard scale), and cov is the coefficient covariance matrix. The solvet() function is a version of the standard solve() function, adapted in the Hmisc package to allow adjusting the tolerance for singularity. The result chisq is the Wald chi-square statistic.
For an unpenalized model this is straightforward: the Wald test refers that chi-square statistic to a chi-square distribution with length(idx) degrees of freedom. It also works with robust coefficient covariance matrices. Frank Harrell's rms package uses the difference between each chi-square statistic and its degrees of freedom (the statistic's expected value under the null hypothesis) as a convenient summary of how much each predictor (including nonlinear and interaction terms) contributes to the model.
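To do the same thing by hand on a coxph fit, here is a minimal sketch (using base solve() rather than Hmisc's solvet(); the model, data, and the use of grep() to pick out the spline coefficients are just for illustration):

library(survival)
library(splines)

fit <- coxph(Surv(time, status) ~ ns(age, 3) + sex, data = lung)

idx <- grep("age", names(coef(fit)))   # indices of the coefficients to test jointly
b   <- coef(fit)[idx]                  # point estimates on the log-hazard scale
V   <- vcov(fit)[idx, idx]             # corresponding block of the covariance matrix

chisq <- drop(t(b) %*% solve(V, b))    # Wald chi-square: b' V^{-1} b
df    <- length(idx)
c(chisq = chisq, df = df, p = pchisq(chisq, df, lower.tail = FALSE))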
You need to make some choices with a penalized model, for example one that includes pspline() terms. As Therneau and Grambsch explain in Section 5.8, there are two approaches to estimating the coefficient covariance matrix, which is why the summary() of such a coxph object reports two sets of standard errors, se(coef) and se2. The former is more conservative. The vcov() function on such a model returns the more conservative matrix from the var slot of the coxph object, which the survival package uses for significance tests; I think the rms package effectively uses the var2 matrix, as proposed by Gray. Whichever covariance matrix you choose, you then need to use the model's effective degrees of freedom, corrected for the penalization, rather than the raw number of associated coefficients, in the chi-square test.
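As a sketch of where those pieces live in a penalized coxph fit (again on the lung data for illustration; I believe the df component holds the per-term effective degrees of freedom, but check the documentation for your version of the survival package):

library(survival)

fitp <- coxph(Surv(time, status) ~ pspline(age) + sex, data = lung)
summary(fitp)      # reports both se(coef) and se2 for the spline coefficients

V1 <- fitp$var     # conservative matrix; this is what vcov(fitp) returns
V2 <- fitp$var2    # the alternative (Gray-type) matrix
fitp$df            # effective degrees of freedom, corrected for the penalization

# chunk Wald test for the spline, using the chosen matrix and the effective df
idx   <- grep("age", names(coef(fitp)))
b     <- coef(fitp)[idx]
chisq <- drop(t(b) %*% solve(V1[idx, idx], b))
edf   <- fitp$df[1]    # effective df of the pspline term (first term in the formula)
pchisq(chisq, df = edf, lower.tail = FALSE)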
You can use the effective degrees of freedom for likelihood-ratio tests comparing nested models when such tests are appropriate. They are not appropriate when the model uses robust covariance matrices. Robust models use the same point estimates based on maximizing partial likelihood and then adjust the covariance matrix to account for things like clustering. Likelihood-based comparisons thus wouldn't take that extra uncertainty into account. If your model involves robust covariance matrices then you need to use Wald tests instead. As Harrell says in Section 9.6 of his RMS textbook, such "adjustments for heterogeneity and correlated observations are not available for score and LR statistics."
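For instance (another sketch on the lung data), the nested-model likelihood-ratio test is a one-liner when no robust variance is involved, while a robust fit would send you back to the Wald quadratic form sketched above:

library(survival)

f0 <- coxph(Surv(time, status) ~ sex,       data = lung)
f1 <- coxph(Surv(time, status) ~ sex + age, data = lung)
anova(f0, f1)    # likelihood-ratio chi-square for adding age

# with a robust covariance matrix, test the added coefficients with the Wald quadratic form
# instead, using vcov() of the robust fit
f1r <- coxph(Surv(time, status) ~ sex + age, data = lung, robust = TRUE)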
Comments:

– EdM (Sep 15 '22 at 20:36): Have you tried applying vcov() to the output of your coxph object? That should be a square matrix of dimension equal to the number of coefficients that you estimated. For example, try dim(vcov(yourCOXPHmodel)).

– JED HK (Sep 16 '22 at 08:21): vcov() works and indeed returns what you suggested (including the individual spline terms of those variables with pspline() too).