It is well agreed (for example, in this discussion) that logistic regression guarantees well calibrated in-the-large predictions, i.e., the mean predicted probability matches the observed event rate.
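To make that property concrete, here is a minimal sketch (assuming numpy and statsmodels; the data-generating process is made up for illustration). For maximum-likelihood logistic regression with an intercept, the score equation for the intercept forces the mean fitted probability to equal the observed event rate on the training data, even when the model is misspecified:

```python
# Minimal sketch of calibration-in-the-large for MLE logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 3))
# The true model is deliberately non-linear, so the fitted LR is misspecified.
logits = 0.5 * X[:, 0] + X[:, 1] ** 2 - 1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
p_hat = fit.predict(sm.add_constant(X))

print("observed event rate:       ", y.mean())
print("mean predicted probability:", p_hat.mean())  # equal up to optimizer tolerance
```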
Does logistic regression guarantee that calibration-in-the-small is good too, or not necessarily?
If calibration-in-the-small is not guaranteed (or in cases where it turns out to be poor), what is known about the reasons?
Are there known methods to improve calibration-in-the-small via a modified loss function (or some other way) while training the LR model, or is this only addressed by applying a calibration function/transformation to the model's output probabilities?
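As an illustration of the second option (a transformation applied to the outputs), here is a minimal sketch, assuming scikit-learn, of isotonic recalibration learned on held-out data; Platt scaling (a second logistic fit on the scores) is the other common choice. The data-generating process and split sizes are assumptions for the example.

```python
# Minimal sketch of post-hoc recalibration of LR probabilities.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 30_000
X = rng.normal(size=(n, 3))
logits = 0.5 * X[:, 0] + X[:, 1] ** 2 - 1.0   # deliberately misspecified for the LR below
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

# Three-way split: fit the model, fit the recalibration map, evaluate.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Learn a monotone map: raw predicted probability -> recalibrated probability.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
iso.fit(lr.predict_proba(X_cal)[:, 1], y_cal)

p_raw = lr.predict_proba(X_te)[:, 1]
p_recal = iso.predict(p_raw)
```

One caveat: a map like this only reshapes observed frequency as a function of the predicted probability; if poor calibration-in-the-small stems from a misspecified linear predictor (omitted non-linearities or interactions), the recalibrated model still cannot recover the correct conditional probabilities for individual covariate patterns.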
Frank wrote: "This addresses only calibration-in-the-large which is not what we want: calibration-in-the-small. – Frank Harrell Aug 20 '18 at 12:35"
Let me ask it a different way: does logistic regression guarantee calibration-in-the-small? My experience (across a range of model types fitted to very large amounts of data) is that we get almost perfect calibration in the large, and ALWAYS a severe lack of calibration in the small.
So I am trying to understand whether there is an explanation for why calibration-in-the-small turns out to be poor, and what the common techniques are for addressing it.
– viggen Jan 18 '20 at 20:08
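For context, this is the kind of check behind that observation, as a minimal sketch assuming scikit-learn and a made-up data-generating process: compare the overall means (in the large) and then a binned reliability curve, which is a crude look in the small.

```python
# Minimal sketch of a binned reliability check: group held-out cases by
# predicted probability and compare the mean prediction with the observed
# event rate in each bin.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 50_000
X = rng.normal(size=(n, 3))
logits = 0.5 * X[:, 0] + X[:, 1] ** 2 - 1.0   # misspecified linear predictor
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
p_te = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# Overall means (in the large); the exact training-set identity carries over
# approximately to a large held-out sample from the same distribution.
print("calibration in the large:", y_te.mean(), "vs", p_te.mean())

# Per-bin comparison: observed frequency vs mean predicted probability.
obs, pred = calibration_curve(y_te, p_te, n_bins=10, strategy="quantile")
for o, p in zip(obs, pred):
    print(f"predicted {p:.3f}   observed {o:.3f}")
```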