
In theory, is it possible for the loss on the training set to increase while the training accuracy also increases? (In the normal case the training loss goes down as accuracy goes up.)
And vice versa: can the training loss decrease together with the accuracy?

Note: This is obviously different from the classic underfitting/overfitting cases, which are easier to grasp.

Jjang
  • Certainly it is possible that the loss decreases while the accuracy stays the same (loss is defined in terms of probabilities, whereas accuracy is discrete). – Tim Feb 21 '18 at 17:24
  • But why? Can you provide an example please? – Jjang Feb 21 '18 at 17:35
  • E.g. the target is [0, 1] and the predicted probabilities are [0.1, 0.1] in the first case and [0.45, 0.45] in the second. Say you use a >0.5 decision rule for classification, so the classifications won't change (and neither will the accuracy), but the loss may well change (say, log-loss or squared loss). – Tim Feb 21 '18 at 18:40
  • @Tim I've copied your comments to an answer. If you would like to write your own answer, please let me know so that I can delete mine. – Sycorax Jul 05 '18 at 15:05

1 Answer


In comments, @Tim writes:

Certainly it is possible that the loss decreases while the accuracy stays the same (loss is defined in terms of probabilities, whereas accuracy is discrete). E.g. the target is [0, 1] and the predicted probabilities are [0.1, 0.1] in the first case and [0.45, 0.45] in the second. Say you use a >0.5 decision rule for classification, so the classifications won't change (and neither will the accuracy), but the loss may well change (say, log-loss or squared loss).
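
To see the numbers, here is a minimal sketch of Tim's example (my own illustration, assuming log-loss and NumPy; it is not part of the original comment): both prediction vectors give the same accuracy under the >0.5 rule, but the log-loss differs.

```python
# Illustration (not from the original comment): target [0, 1],
# predictions [0.1, 0.1] vs [0.45, 0.45].
import numpy as np

def log_loss(y_true, p):
    """Mean negative log-likelihood for binary targets."""
    y = np.asarray(y_true, dtype=float)
    p = np.asarray(p, dtype=float)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def accuracy(y_true, p, threshold=0.5):
    """Accuracy under a >threshold decision rule."""
    preds = (np.asarray(p) > threshold).astype(int)
    return float(np.mean(preds == np.asarray(y_true)))

y = [0, 1]
for p in ([0.1, 0.1], [0.45, 0.45]):
    print(p, "accuracy:", accuracy(y, p), "log-loss:", round(log_loss(y, p), 3))

# Output:
# [0.1, 0.1] accuracy: 0.5 log-loss: 1.204
# [0.45, 0.45] accuracy: 0.5 log-loss: 0.698
```

The accuracy is 0.5 in both cases because the thresholded predictions are identical, yet the log-loss improves from 1.204 to 0.698, so the two metrics can move independently.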


I've copied this comment as a community wiki answer because the comment is, more or less, an answer to this question. We have a dramatic gap between the number of questions and the number of answers, and at least part of the problem is that some questions are answered in comments: if the comments that answer the question were posted as answers instead, we would have fewer unanswered questions.


Sycorax