I am working on a regression task where the target variable cannot be negative. For the predictions I am using the LightGBM framework (Python) with the RMSE loss. The issue I am facing is that some of the predictions are negative. I understand that I could apply a log transformation to the target variable before training the model, and then map the predictions back to the original scale with the exponential function.
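To illustrate what I mean, here is a minimal sketch of that transform-and-invert idea (model fitting omitted; the `log_preds` array just stands in for a GBM's raw outputs on the log scale):

```python
import numpy as np

# Targets are non-negative; train the model on log(y) instead of y.
y = np.array([0.5, 2.0, 10.0, 0.1])
y_log = np.log(y)

# Stand-in for the model's predictions on the log scale --
# these can be any real numbers, including negative ones.
log_preds = np.array([-3.2, 0.0, 1.5, -0.7])

# Inverting with exp guarantees strictly positive predictions.
preds = np.exp(log_preds)
assert (preds > 0).all()
```

So the inverse transform, not the loss, is what enforces positivity here, which is why I am wondering whether a loss function alone could achieve the same thing.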
But I also read that it might be possible to handle this with a custom loss function. So my question is: what kind of loss function could play such a role? Can a loss function by itself prevent the prediction of negative values?
Note: more generally, if you know of resources that deal with strategies for preventing negative predictions in regression problems (preferably with GBMs), I would be glad if you could share them.