I ran regressions and random forests using log loss as the scoring metric, as suggested here and here. I was then reading this, which was linked in the second reference, and started to have doubts:
Is log loss/cross entropy the same, in practice, as the logarithmic scoring rule?
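To make the comparison concrete, here is how I understand the two quantities (my own notation; $p_i$ is the predicted probability assigned to the outcome that was actually observed for case $i$, and $N$ is the number of cases):

$$\text{LogScore} = \frac{1}{N}\sum_{i=1}^{N} \ln p_i, \qquad \text{LogLoss} = -\frac{1}{N}\sum_{i=1}^{N} \ln p_i$$

so, as far as I can tell, they differ only in sign (higher is better for the score, lower is better for the loss).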
Conceptually, the two seem to be describing the same thing:
"The logarithmic rule gives more credit to extreme predictions that are “right”" (about logarithmic score).
"Log loss penalizes both types of errors, but especially those predictions that are confident and wrong" (here, about cross entropy)