
I'm trying to train a neural network with a stopping criterion based on the difference between the error on the current and previous iterations. But sometimes the error starts growing instead of decreasing, and the difference between the current and previous errors alone is not enough to handle this situation.

Are there any known or best-practice techniques for dealing with this problem?

Right now I'm doing something like this: I check whether the error on the current iteration is bigger than on the previous one, and if so, I count it. When that happens N times in a row, I stop training. But it doesn't seem like the best possible solution.
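For reference, here is a minimal sketch of the counter-based rule described above. The names `train_one_epoch`, `validation_error`, and `patience` are placeholders I'm assuming, not part of my actual code:

```python
# Sketch of a patience-based stopping rule: stop once the error has
# increased `patience` iterations in a row. `train_one_epoch` and
# `validation_error` stand in for the real training/evaluation routines.

def train_with_patience(train_one_epoch, validation_error,
                        patience=5, max_epochs=1000):
    prev_error = float("inf")
    bad_epochs = 0  # consecutive iterations where the error went up

    for epoch in range(max_epochs):
        train_one_epoch()
        error = validation_error()

        if error > prev_error:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # error has grown `patience` times in a row
        else:
            bad_epochs = 0  # reset the counter on any improvement

        prev_error = error

    return prev_error
```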

  • Are you talking about error on the training or the test set? You may want to check out https://en.wikipedia.org/wiki/Early_stopping If you use stochastic descent you will always have fluctuations of the error rate on the training set, but it will always decrease in the long run. What you want to find is the minimal test error rate – Łukasz Grad Feb 23 '17 at 11:44
  • @ŁukaszGrad I'm talking about the error on the test set. Thanks for the link – kianu reeves Feb 23 '17 at 14:09
  • In general, when your model starts to overfit, the test and training errors start to diverge, i.e. the training error drops while the test error increases, and this should be an okay stopping criterion (see the sketch after these comments). – Łukasz Grad Feb 23 '17 at 14:19
  • @FranckDernoncourt But that link doesn't give a satisfactory answer, just a question. – SmallChess Feb 23 '17 at 23:01
  • @StudentT Isn't the question the same? I think that is the criterion for closing as a duplicate. – Franck Dernoncourt Feb 23 '17 at 23:16
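A minimal sketch of the early-stopping idea raised in the comments: track the best validation error seen so far, keep a copy of the corresponding weights, and stop once the validation error has not improved for a fixed number of epochs. The names `model`, `get_weights`, `set_weights`, `train_one_epoch`, and `validation_error` are assumptions standing in for whatever the real training code provides:

```python
import copy

# Validation-based early stopping: remember the weights that gave the
# lowest validation error and stop after `patience` epochs with no
# improvement, then restore the best weights.

def early_stopping(model, train_one_epoch, validation_error,
                   patience=10, max_epochs=1000):
    best_error = float("inf")
    best_weights = copy.deepcopy(model.get_weights())
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(model)
        error = validation_error(model)

        if error < best_error:
            best_error = error
            best_weights = copy.deepcopy(model.get_weights())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # validation error has stopped improving

    model.set_weights(best_weights)  # restore the best model found
    return best_error
```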

0 Answers