
Reading the definition of MLE, it sounds like it is: "Given a likelihood function, estimate the most likely parameters."

When I read that, it sounds like it has the same goal as what backpropagation does in neural networks: find the most likely parameters given the "likelihood function" of the neural network.

My question is: are these two concepts the same, except for the following two differences?

  1. MLE can be applied to anything, whereas backprop is only for neural networks.
  2. Backprop works backwards to find the best parameters, while MLE works forwards.

Any thoughts would be greatly appreciated; the less technical the better, since I am still in the learning phase. Thanks!

Katsu
  • Backpropagation is used in gradient descent, so I marked it as a duplicate of a question asking about that. TL;DR: it's comparing apples to oranges. – Tim Oct 21 '22 at 18:46
  • @katsu Maximum likelihood (ML) is "what" you are doing, but not "how" you are doing it. Doing ML estimation might not require any numerical optimization; in some circumstances the ML estimate (MLE) is known in closed form and doesn't need any optimization. In other circumstances you might need to do numerical optimization such as Newton-Raphson or gradient descent. Such methods require derivatives, and back-propagation is a way to get derivatives in numerical optimization. As Tim said, these terms are comparing apples to oranges. – bdeonovic Oct 21 '22 at 19:40
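
To make bdeonovic's "what vs. how" point concrete, here is a minimal sketch (my own illustration, not from the thread; the toy data, learning rate, and iteration count are all made-up choices). It estimates the mean of a normal distribution twice: once with the closed-form MLE (the sample average), and once by gradient descent on the negative log-likelihood with a hand-written derivative. Backpropagation is just an algorithm for computing such derivatives automatically when the model is a neural network.

```python
# Illustrative sketch: one MLE target, two ways of finding it.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=1000)  # toy data, true mean = 3.0

# Case 1: closed-form MLE. For a normal distribution with known variance,
# the maximum-likelihood estimate of the mean is just the sample average;
# no optimization is needed at all.
mu_closed_form = data.mean()

# Case 2: numerical MLE via gradient descent. We minimize the negative
# log-likelihood; the derivative below is written by hand, which is
# exactly the job backpropagation automates for neural networks.
mu = 0.0   # initial guess (arbitrary)
lr = 0.1   # learning rate (arbitrary)
for _ in range(100):
    # d/d(mu) of the average negative log-likelihood of N(mu, 1) is mu - mean(x)
    grad = mu - data.mean()
    mu -= lr * grad

print(mu_closed_form, mu)  # both are ~3.0: same estimate, different routes
```

Both routes arrive at (approximately) the same number: MLE defines "what" is being estimated, while closed-form algebra, gradient descent, and backpropagation are different answers to "how".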
