I want to build a model to predict the outcomes of experiments.
My predictive model outputs scores on a scale from 1 to 100.
I want to test whether my predictive scores can be used to classify experimental outcomes into "good" and "bad" groups.
Experimentally, we ran 1000 experiments, and my predictive model gives one score per experiment, so I have 1000 (score, outcome) pairs.
To test whether my predictive model is statistically acceptable, what should I do? So far I have done a ROC analysis and a sensitivity test on these 1000 × 2 data.
I plotted the ROC curve (sensitivity vs. 1 − specificity) for all 1000 experimental outcomes and predictive scores; the AUC is 0.64.
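For reference, this is roughly how I computed the AUC. The helper below is a minimal sketch using the Mann–Whitney formulation of AUC (probability that a random positive outranks a random negative, ties counted as 1/2); the labels and scores are made-up placeholders, not my real 1000 data points. Since in my setup *low* scores should mean "good", I negate the scores so that higher = more likely good before computing AUC.

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive (label 1) has a higher score than a
    randomly chosen negative (label 0), counting ties as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy placeholder data: label 1 = "good" outcome, scores on the 1-100 scale.
labels = [1, 1, 0, 0, 1, 0]
scores = [2, 4, 80, 60, 7, 90]
# Negate scores because in my scheme lower scores indicate "good".
print(auc(labels, [-s for s in scores]))  # -> 1.0 (perfect separation in this toy case)
```

On the real data this should reproduce the 0.64 I got from the plotted curve.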
Let's say my predictive score has a cutoff value of 5: a score < 5 means the experimental outcome is likely to be "good", and a score > 5 means it is likely to be "bad". I then calculate the enrichment of my predictive model, i.e. the number of real "good" results among experiments with score < 5, divided by the number of experiments with score < 5.
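Concretely, the enrichment I describe is the fraction of predicted-"good" experiments (score < 5) that turned out truly "good", i.e. the precision (positive predictive value) at that cutoff. A minimal sketch, with made-up placeholder data rather than my real results:

```python
def enrichment(outcomes, scores, cutoff=5):
    """Fraction of experiments with score < cutoff whose real
    outcome was "good" (precision at this cutoff)."""
    predicted_good = [y for y, s in zip(outcomes, scores) if s < cutoff]
    if not predicted_good:
        return float("nan")  # no experiment scored below the cutoff
    return sum(1 for y in predicted_good if y == "good") / len(predicted_good)

# Toy placeholder data: 3 experiments score below the cutoff of 5,
# and 2 of those 3 were truly "good".
outcomes = ["good", "good", "bad", "bad", "good"]
scores   = [2, 3, 90, 4, 70]
print(enrichment(outcomes, scores))  # -> 0.666... (2 of 3 predicted-good were good)
```

Comparing this value against the base rate of "good" outcomes in all 1000 experiments would show whether the cutoff actually enriches for good results.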
Did I do anything wrong here?
What else should I do to check the predictive power of my model?