
I'm using a random forest to generate predictions for a regression problem. I've seen confidence intervals constructed from a model's evaluation metric rather than directly from its predictions. Is it possible to bound each prediction in my test set with a confidence interval? My additional questions are:

i) If it's possible, can you please give the step-by-step procedure in detail, being careful about the train/test split during model building? ii) If not, how can I quantify the uncertainty in my regression results?

Student
  • 375
  • Are you asking for a confidence interval or for a prediction interval? – user2974951 Dec 09 '22 at 07:21
  • @user2974951, I want to quantify how uncertain my prediction is. I need a confidence interval bounding my regression predictions. Does it exist? – Student Dec 09 '22 at 08:07
  • That's a prediction interval then, not a confidence interval. RF can return these; it depends on the implementation. – user2974951 Dec 09 '22 at 08:56
  • @user2974951, thanks for pointing me in the right direction! I read something like "Confidence intervals are constructed for model parameters," but I've come across posts calculating 'confidence intervals' on model evaluation metrics like AUC, F1, MSE, etc. Do you think that's legitimate? Or should the use of 'confidence interval' be restricted to estimating uncertainty in model parameters only? – Student Dec 09 '22 at 10:20
  • I would stick to using CIs only for parameters (in a frequentist approach, to be more precise); check also https://robjhyndman.com/hyndsight/intervals/ and What is the difference between estimation and prediction?. – user2974951 Dec 09 '22 at 10:29
  • @user2974951, great references! Thanks a lot! – Student Dec 09 '22 at 13:25
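
As the comments note, what's wanted here is a prediction interval, and whether a random forest can produce one depends on the implementation. A minimal sketch of one common heuristic, assuming scikit-learn's `RandomForestRegressor` (the dataset and all parameters below are illustrative, not from the question): take the empirical percentiles of the individual trees' predictions for each test point. Note that this is a rough heuristic, not a calibrated interval — the trees are correlated, so such intervals tend to under-cover; quantile regression forests or conformal prediction are the principled alternatives.

```python
# Heuristic prediction intervals from the spread of per-tree predictions.
# Sketch only: synthetic data, illustrative parameters.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic regression data (stand-in for the asker's dataset)
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fit on the training split only, so the test intervals are honest
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

# Each tree's prediction for every test point: shape (n_trees, n_test)
per_tree = np.stack([tree.predict(X_test) for tree in rf.estimators_])

# 5th/95th percentiles across trees give a rough 90% interval per point
lower = np.percentile(per_tree, 5, axis=0)
upper = np.percentile(per_tree, 95, axis=0)
point = rf.predict(X_test)  # the usual forest-mean point prediction

# Fraction of test targets falling inside their interval
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"Empirical coverage on the test set: {coverage:.2f}")
```

Because the trees are fit on bootstrap samples of the same data, the empirical coverage typically lands below the nominal 90%; treat the numbers as a rough uncertainty signal rather than a guarantee.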

0 Answers