I have a case where under-predictions are more costly than over-predictions. Is there a way to penalize a linear regression model during training according to some predefined ratio?
E.g. I want to define that, for an actual value of 10, predicting 9 and predicting 12 incur the same penalty (rather than 9 and 11, as with the default symmetric loss).
I guess my question is: is this an acceptable thing to do in the first place, and if so, how would one best go about it?
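To make the idea concrete, here is a minimal sketch on synthetic data (assuming NumPy and SciPy are available; the data, the starting point, and the 4x weight are all illustrative) of fitting the line by directly minimizing an asymmetric squared loss instead of ordinary least squares:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: y = 3x + 5 plus noise (illustrative only)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 5.0 + rng.normal(0.0, 1.0, size=200)

# Under-predicting by 1 should cost as much as over-predicting by 2,
# so under-prediction residuals get 2^2 = 4 times the squared-error weight.
UNDER_WEIGHT = 4.0

def asymmetric_sse(params):
    intercept, slope = params
    resid = y - (intercept + slope * x)  # resid > 0 means under-prediction
    weights = np.where(resid > 0, UNDER_WEIGHT, 1.0)
    return np.sum(weights * resid ** 2)

fit = minimize(asymmetric_sse, x0=np.zeros(2), method="Nelder-Mead")
intercept, slope = fit.x

# Ordinary least-squares fit for comparison
ols_slope, ols_intercept = np.polyfit(x, y, 1)
```

In the middle of the data range the asymmetric fit sits above the OLS fit, which is exactly the bias toward over-prediction I'm after.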
PS. Maybe there is a reasonable approximation that avoids meddling with the least-squares objective. E.g. I've tried increasing the y (output) values by 1-2%, and that moves me in the right direction, though it requires incrementally testing for the best percentage increase (not the worst thing..).
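For reference, the scaling workaround I tried looks roughly like this (synthetic data; the 4x under-prediction weight and the grid of percentages are hypothetical choices): fit OLS on scaled-up targets, then score each candidate against the original targets with the asymmetric cost.

```python
import numpy as np

# Synthetic data (illustrative)
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=200)

def asymmetric_cost(y_true, y_pred, under_weight=4.0):
    """Score a fit with under-predictions weighted 4x (hypothetical ratio)."""
    resid = y_true - y_pred
    return np.sum(np.where(resid > 0, under_weight, 1.0) * resid ** 2)

# Grid-search the percentage increase: fit OLS on scaled targets,
# evaluate against the *original* targets with the asymmetric cost.
best_pct, best_cost = None, np.inf
for pct in np.arange(0.0, 0.051, 0.005):
    slope, intercept = np.polyfit(x, y * (1.0 + pct), 1)
    cost = asymmetric_cost(y, intercept + slope * x)
    if cost < best_cost:
        best_pct, best_cost = pct, cost
```

On data like this, some strictly positive scaling beats 0%, which matches what I observed; the downside is that the best percentage has to be found by search rather than falling out of the loss itself.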