
I have a clinical dataset that already includes common prognostic scores, such as characteristic tumor markers, tumor classification, etc. I have developed a new score that predicts patients' prognosis. How can I show that my score is superior to the already established prognostic markers? I have read many publications using ROC analysis, evaluation of c-indices, and so on, but there seems to be no common consensus. Does anyone have an idea how to tackle this problem?

1 Answer


Frank Harrell discusses this in a blog post. He recommends against using his own c-index for comparing models because it

is a low-power procedure. This is because the c-index is a rank measure ... that does not sufficiently reward extreme predictions that are correct.

If you want to show that adding a new predictor improves results over a current standard model, the best approach is to fit the standard model to your data, add in your new predictor, and compare the two models directly (for example, with a likelihood-ratio test; a sketch follows below). The Adequacy Index that he describes in that post and in Section 9.8.4 of his book provides a measure of how much is lost if you don't include the new predictor.
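As a minimal sketch of that nested-model comparison, assuming a binary prognosis endpoint and simulated stand-ins for your variables (the names X_std, new_score, and y are hypothetical): fit the standard model, refit it with the new score added, and compare twice the difference in log-likelihood to a chi-squared distribution.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Hypothetical data: X_std holds the established prognostic variables,
# new_score is the proposed predictor, y is the binary outcome.
rng = np.random.default_rng(0)
n = 500
X_std = rng.normal(size=(n, 2))            # e.g. tumor marker, classification
new_score = rng.normal(size=n)
logit = 0.8 * X_std[:, 0] + 1.2 * new_score
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Standard model: established predictors only.
X0 = sm.add_constant(X_std)
fit0 = sm.Logit(y, X0).fit(disp=0)

# Augmented model: same predictors plus the new score.
X1 = sm.add_constant(np.column_stack([X_std, new_score]))
fit1 = sm.Logit(y, X1).fit(disp=0)

# Likelihood-ratio test: does the new score add information
# beyond the standard model?
lr_stat = 2 * (fit1.llf - fit0.llf)
p_value = stats.chi2.sf(lr_stat, df=1)     # one added parameter
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4g}")
```

The same idea carries over to survival outcomes: fit nested Cox models with and without the new score and compare their partial log-likelihoods in the same way.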

If you are developing your new score on a particular data set and don't have an independent set against which to test it, this answer to a similar question suggests how to use bootstrapping for validation and calibration of your modeling approach while comparing against alternative models.
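One simple version of that bootstrap idea is an optimism-corrected performance estimate, sketched below for the AUC using the hypothetical setup above (the function name optimism_corrected_auc is made up for illustration; the linked answer and Harrell's validation tools implement richer versions covering calibration as well):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """Harrell-style optimism correction for the AUC of a logistic model."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

    optimism = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample with replacement
        Xb, yb = X[idx], y[idx]
        if len(np.unique(yb)) < 2:            # skip degenerate resamples
            continue
        m = LogisticRegression(max_iter=1000).fit(Xb, yb)
        auc_boot = roc_auc_score(yb, m.predict_proba(Xb)[:, 1])  # on resample
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])    # on original
        optimism.append(auc_boot - auc_orig)

    # Apparent performance minus average over-fitting gives an honest estimate.
    return apparent - np.mean(optimism)
```

Running this for both the standard and the augmented model gives optimism-corrected performance estimates that can be compared without holding out an independent test set.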

EdM